ci-kubernetes-e2e-gci-gke-staging-parallel: broken test run #37988

Closed
k8s-github-robot opened this issue Dec 2, 2016 · 51 comments
Labels: area/test-infra, kind/flake, priority/backlog, sig/network, sig/node

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/976/

Run so broken it didn't make JUnit output!

k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 2, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1163/

Multiple broken tests:

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  5 03:05:31.814: All nodes should be ready after test, the server has asked for the client to provide credentials (get nodes)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:418

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.StatusError | 0xc820d75080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get jobs.extensions foo)",
            Reason: "Unauthorized",
            Details: {
                Name: "foo",
                Group: "extensions",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get jobs.extensions foo)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82066e500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-replicaset-7t3qd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replicaset-7t3qd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replicaset-7t3qd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32023

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820f77880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-r53c9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-r53c9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-r53c9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1100
Dec  5 03:08:34.498: Failed to get server version: Unable to get server version: an error on the server ("Internal Server Error: \"/version\"") has prevented the request from succeeding
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:413

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820f24c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-s89x7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-s89x7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-s89x7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820310c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-var-expansion-50mpf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-50mpf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-50mpf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28503

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8203b4f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-v1job-t6xyp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-t6xyp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-t6xyp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29657

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8205ae500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-qwftj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-qwftj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-qwftj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35590

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.StatusError | 0xc820f60000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get replicationControllers rc-light)",
            Reason: "Unauthorized",
            Details: {
                Name: "rc-light",
                Group: "",
                Kind: "replicationControllers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get replicationControllers rc-light)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:249

Issues about this test specifically: #27443 #27835 #28900 #32512
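
Every failure in this run surfaces as an apiserver response error (HTTP 401 Unauthorized or 500 InternalError) rather than an assertion in the test logic itself, which points at the master being unhealthy during the run. As a minimal triage sketch, not part of the e2e framework, one could bucket the two classes with the status-error helpers; this assumes the current k8s.io/apimachinery/pkg/api/errors package (the successor of the pkg/api/errors package these logs were built against):

```go
package main

import (
	"errors"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// classify buckets an apiserver error into the two classes seen in this run:
// auth failures (HTTP 401) and internal server errors (HTTP 500).
func classify(err error) string {
	switch {
	case apierrors.IsUnauthorized(err):
		return "401 Unauthorized: client credentials rejected by the apiserver"
	case apierrors.IsInternalError(err):
		return "500 InternalError: the apiserver failed to serve the request"
	default:
		return "other (possibly a real test failure)"
	}
}

func main() {
	// Errors constructed here only to mimic the StatusError values dumped above.
	fmt.Println(classify(apierrors.NewUnauthorized("the server has asked for the client to provide credentials")))
	fmt.Println(classify(apierrors.NewInternalError(errors.New("Internal Server Error"))))
}
```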

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1219/

Multiple broken tests:

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
Expected error:
    <*errors.errorString | 0xc8201a2af0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:108

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc8200e77b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
Expected error:
    <*errors.errorString | 0xc8200e57b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:109

Issues about this test specifically: #32023

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc820d062a0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:393

Issues about this test specifically: #27443 #27835 #28900 #32512

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:195
Expected success, but got an error:
    <*errors.errorString | 0xc8201b4760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:194

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc820dffc60>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345

Issues about this test specifically: #27196 #28998 #32403 #33341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1436/

Multiple broken tests:

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 19:19:48.230: Couldn't delete ns: "e2e-tests-services-lzeac": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-services-lzeac/replicasets\"") has prevented the request from succeeding (get replicasets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-services-lzeac/replicasets\\\"\") has prevented the request from succeeding (get replicasets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820966f00), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82085c700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-exgpm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-exgpm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-exgpm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
Expected error:
    <*errors.errorString | 0xc8208819b0>: {
        s: "failed to get logs from pod-78d8ffc2-bdbd-11e6-9a09-0242ac11000a for test-container: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-r7oam/pods/pod-78d8ffc2-bdbd-11e6-9a09-0242ac11000a/log?container=test-container&previous=false\\\"\") has prevented the request from succeeding (get pods pod-78d8ffc2-bdbd-11e6-9a09-0242ac11000a)",
    }
    failed to get logs from pod-78d8ffc2-bdbd-11e6-9a09-0242ac11000a for test-container: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-r7oam/pods/pod-78d8ffc2-bdbd-11e6-9a09-0242ac11000a/log?container=test-container&previous=false\"") has prevented the request from succeeding (get pods pod-78d8ffc2-bdbd-11e6-9a09-0242ac11000a)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #37439

Failed: [k8s.io] Downward API volume should provide container's cpu request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 19:21:11.432: Couldn't delete ns: "e2e-tests-downward-api-hque8": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-hque8/resourcequotas\"") has prevented the request from succeeding (get resourcequotas) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-hque8/resourcequotas\\\"\") has prevented the request from succeeding (get resourcequotas)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820bdaeb0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 19:13:49.157: Couldn't delete ns: "e2e-tests-configmap-jv0z1": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-configmap-jv0z1/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-configmap-jv0z1/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82093ef00), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:52
Expected error:
    <*errors.StatusError | 0xc8204e3600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-e2e-kubelet-etc-hosts-f98ga/pods/test-pod\\\"\") has prevented the request from succeeding (get pods test-pod)",
            Reason: "InternalError",
            Details: {
                Name: "test-pod",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-e2e-kubelet-etc-hosts-f98ga/pods/test-pod\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-e2e-kubelet-etc-hosts-f98ga/pods/test-pod\"") has prevented the request from succeeding (get pods test-pod)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:60

Issues about this test specifically: #27023 #34604

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 19:19:51.371: Couldn't delete ns: "e2e-tests-horizontal-pod-autoscaling-k3k8w": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-horizontal-pod-autoscaling-k3k8w/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-horizontal-pod-autoscaling-k3k8w/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820628690), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 19:13:48.675: Couldn't delete ns: "e2e-tests-secrets-s471y": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-secrets-s471y/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-secrets-s471y/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a2ea00), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1460/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820bf9880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-gms8x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-gms8x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-gms8x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35473

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 03:24:49.850: Couldn't delete ns: "e2e-tests-job-qqnxg": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-job-qqnxg\"") has prevented the request from succeeding (delete namespaces e2e-tests-job-qqnxg) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-job-qqnxg\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-job-qqnxg)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a14550), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 03:25:00.749: Couldn't delete ns: "e2e-tests-emptydir-ktmaq": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-emptydir-ktmaq/jobs\"") has prevented the request from succeeding (get jobs.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-emptydir-ktmaq/jobs\\\"\") has prevented the request from succeeding (get jobs.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8209c4f00), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 03:16:28.031: Couldn't delete ns: "e2e-tests-cadvisor-6ez2d": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-cadvisor-6ez2d/replicationcontrollers\"") has prevented the request from succeeding (get replicationcontrollers) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-cadvisor-6ez2d/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationcontrollers)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a5c1e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32371

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:220
Dec  9 03:16:25.601: Pod did not start running: an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-ywaiw/pods?fieldSelector=metadata.name%3Dpfpod\"") has prevented the request from succeeding (get pods)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:184

Issues about this test specifically: #26955

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:882
Dec  9 03:17:30.611: Created service with conflicting NodePort: &TypeMeta{Kind:,APIVersion:,}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:870

Issues about this test specifically: #31575 #32756

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1468/

Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc8204f2a90>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-229d8ba9-be17-11e6-b2ac-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-229d8ba9-be17-11e6-b2ac-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #31400

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:83
Expected error:
    <*errors.errorString | 0xc82027c280>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:960

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:847
Dec  9 06:00:34.214: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:202

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1019
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.214.189 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-dk6zg] []  <nil> Created e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee\nScaling up e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc820a60ec0 exit status 1 <nil> true [0xc82090a4e0 0xc82090a4f8 0xc82090a510] [0xc82090a4e0 0xc82090a4f8 0xc82090a510] [0xc82090a4f0 0xc82090a508] [0xafa830 0xafa830] 0xc820c65620}:\nCommand stdout:\nCreated e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee\nScaling up e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.214.189 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-dk6zg] []  <nil> Created e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee
    Scaling up e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc820a60ec0 exit status 1 <nil> true [0xc82090a4e0 0xc82090a4f8 0xc82090a510] [0xc82090a4e0 0xc82090a4f8 0xc82090a510] [0xc82090a4f0 0xc82090a508] [0xafa830 0xafa830] 0xc820c65620}:
    Command stdout:
    Created e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee
    Scaling up e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-2ae88081b677b0f79f7020b8eb53a7ee up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:167

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:333
Expected error:
    <*errors.StatusError | 0xc8212d6a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-services-dydfl/services/service1\\\"\") has prevented the request from succeeding (get services service1)",
            Reason: "InternalError",
            Details: {
                Name: "service1",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-dydfl/services/service1\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-dydfl/services/service1\"") has prevented the request from succeeding (get services service1)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:292

Issues about this test specifically: #26128 #26685 #33408 #36298

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1469/

Multiple broken tests:

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 06:26:02.672: Couldn't delete ns: "e2e-tests-containers-kkd57": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-containers-kkd57/events\"") has prevented the request from succeeding (get events) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-containers-kkd57/events\\\"\") has prevented the request from succeeding (get events)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82095edc0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29467

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:344
Dec  9 06:26:03.455: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154

Issues about this test specifically: #27673

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 06:26:02.445: Couldn't delete ns: "e2e-tests-pods-9gvoe": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-9gvoe/endpoints\"") has prevented the request from succeeding (get endpoints) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-9gvoe/endpoints\\\"\") has prevented the request from succeeding (get endpoints)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820956c30), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #38308

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 06:26:02.436: Couldn't delete ns: "e2e-tests-emptydir-4g246": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-4g246/events\"") has prevented the request from succeeding (get events) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-4g246/events\\\"\") has prevented the request from succeeding (get events)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82089f770), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1472/

Multiple broken tests:

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:70
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc82089d580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-var-expansion-u12h5/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-var-expansion-u12h5/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-var-expansion-u12h5/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Issues about this test specifically: #29461

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 07:30:06.324: Couldn't delete ns: "e2e-tests-job-ghs9a": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-job-ghs9a\"") has prevented the request from succeeding (delete namespaces e2e-tests-job-ghs9a) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-job-ghs9a\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-job-ghs9a)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8208ed770), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 07:30:44.350: Couldn't delete ns: "e2e-tests-kubernetes-dashboard-ir6uq": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-kubernetes-dashboard-ir6uq/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-kubernetes-dashboard-ir6uq/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820bd9a90), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #26191

Failed: [k8s.io] Secrets should be consumable from pods in volume with Mode set in the item [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:215
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc8207c6200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-secrets-hdbf9/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-secrets-hdbf9/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-secrets-hdbf9/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Issues about this test specifically: #31969

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:828
Expected error:
    <*errors.StatusError | 0xc820beb280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-services-25rxr/services\\\"\") has prevented the request from succeeding (post services)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-25rxr/services\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-25rxr/services\"") has prevented the request from succeeding (post services)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:820

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1476/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 08:50:10.797: Couldn't delete ns: "e2e-tests-kubectl-5b2yu": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-kubectl-5b2yu/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-kubectl-5b2yu/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8208359f0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82087ae80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-tiw3o/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-tiw3o/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-tiw3o/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30263

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820840d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-ds0qb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-ds0qb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-ds0qb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82076ae80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-1b9j6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-1b9j6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-1b9j6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29513

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 08:53:25.911: Couldn't delete ns: "e2e-tests-kubectl-5pn0c": unable to retrieve the complete list of server APIs: autoscaling/v1: an error on the server ("Internal Server Error: \"/apis/autoscaling/v1\"") has prevented the request from succeeding (&discovery.ErrGroupDiscoveryFailed{Groups:map[unversioned.GroupVersion]error{unversioned.GroupVersion{Group:"autoscaling", Version:"v1"}:(*errors.StatusError)(0xc820f59080)}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820943780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-monitoring-ytvzw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-monitoring-ytvzw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-monitoring-ytvzw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 08:49:57.378: Couldn't delete ns: "e2e-tests-dns-8ldoq": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-dns-8ldoq/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-dns-8ldoq/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a090e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Mesos schedules pods annotated with roles on correct slaves {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820188700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-si507/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-si507/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-si507/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:82
Expected error:
    <*errors.StatusError | 0xc820254580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-v1job-rtvv9/jobs\\\"\") has prevented the request from succeeding (post jobs.batch)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "batch",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-v1job-rtvv9/jobs\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-v1job-rtvv9/jobs\"") has prevented the request from succeeding (post jobs.batch)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:77

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:37
Dec  9 08:42:11.650: unable to create test configMap : an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-jeyn0/configmaps\"") has prevented the request from succeeding (post configmaps)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:321

Issues about this test specifically: #29052

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82084c200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-job-4dy49/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-4dy49/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-4dy49/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82055a600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-resourcequota-9ou3m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-9ou3m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-9ou3m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34372

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820cff480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-b0615/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-b0615/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-b0615/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Generated release_1_2 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 08:50:19.347: Couldn't delete ns: "e2e-tests-clientset-9goem": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-clientset-9goem/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-clientset-9goem/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8203a1a40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32043 #35580

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820672d80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-1obc4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-1obc4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-1obc4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34226

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:452
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc820c52b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-shp4y/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-shp4y/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-shp4y/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Issues about this test specifically: #33985

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820d38000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-bv8bn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-bv8bn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-bv8bn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29197 #36289 #36598

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82096e000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-uv7mq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-uv7mq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-uv7mq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820c37480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-port-forwarding-o067x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-o067x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-o067x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820c29000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-q7o45/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-q7o45/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-q7o45/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27232

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:983
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.194.165 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-rc --image=gcr.io/google_containers/nginx-slim:0.7 --generator=run/v1 --namespace=e2e-tests-kubectl-60838] []  <nil>  Error from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-60838/replicationcontrollers\\\"\") has prevented the request from succeeding (post replicationcontrollers)\n [] <nil> 0xc8208186c0 exit status 1 <nil> true [0xc820eba590 0xc820eba5b0 0xc820eba5c8] [0xc820eba590 0xc820eba5b0 0xc820eba5c8] [0xc820eba5a8 0xc820eba5c0] [0xafa830 0xafa830] 0xc820c93740}:\nCommand stdout:\n\nstderr:\nError from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-60838/replicationcontrollers\\\"\") has prevented the request from succeeding (post replicationcontrollers)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.194.165 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-rc --image=gcr.io/google_containers/nginx-slim:0.7 --generator=run/v1 --namespace=e2e-tests-kubectl-60838] []  <nil>  Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-60838/replicationcontrollers\"") has prevented the request from succeeding (post replicationcontrollers)
     [] <nil> 0xc8208186c0 exit status 1 <nil> true [0xc820eba590 0xc820eba5b0 0xc820eba5c8] [0xc820eba590 0xc820eba5b0 0xc820eba5c8] [0xc820eba5a8 0xc820eba5c0] [0xafa830 0xafa830] 0xc820c93740}:
    Command stdout:
    
    stderr:
    Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-60838/replicationcontrollers\"") has prevented the request from succeeding (post replicationcontrollers)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Mesos applies slave attributes as labels {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8207e6580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-401q6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-401q6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-401q6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28359

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc820753d20>: {
        s: "error while stopping RC: rc-light: Scaling the resource failed with: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-cfllt/replicationcontrollers/rc-light\\\"\") has prevented the request from succeeding (put replicationControllers rc-light); Current resource version 7739",
    }
    error while stopping RC: rc-light: Scaling the resource failed with: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-cfllt/replicationcontrollers/rc-light\"") has prevented the request from succeeding (put replicationControllers rc-light); Current resource version 7739
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:305

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8202a6700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-zdhal/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-zdhal/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-zdhal/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29050

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:929
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.194.165 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx-slim:0.7 --namespace=e2e-tests-kubectl-75zwp] []  <nil>  error: failed to discover supported resources: an error on the server (\"Internal Server Error: \\\"/apis/batch/v2alpha1\\\"\") has prevented the request from succeeding\n [] <nil> 0xc8206419c0 exit status 1 <nil> true [0xc820121418 0xc820121430 0xc820121448] [0xc820121418 0xc820121430 0xc820121448] [0xc820121428 0xc820121440] [0xafa830 0xafa830] 0xc820cfa8a0}:\nCommand stdout:\n\nstderr:\nerror: failed to discover supported resources: an error on the server (\"Internal Server Error: \\\"/apis/batch/v2alpha1\\\"\") has prevented the request from succeeding\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.194.165 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx-slim:0.7 --namespace=e2e-tests-kubectl-75zwp] []  <nil>  error: failed to discover supported resources: an error on the server ("Internal Server Error: \"/apis/batch/v2alpha1\"") has prevented the request from succeeding
     [] <nil> 0xc8206419c0 exit status 1 <nil> true [0xc820121418 0xc820121430 0xc820121448] [0xc820121418 0xc820121430 0xc820121448] [0xc820121428 0xc820121440] [0xafa830 0xafa830] 0xc820cfa8a0}:
    Command stdout:
    
    stderr:
    error: failed to discover supported resources: an error on the server ("Internal Server Error: \"/apis/batch/v2alpha1\"") has prevented the request from succeeding
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82053db00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-lkk58/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-lkk58/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-lkk58/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc8203f7100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-containers-3sikj/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-containers-3sikj/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-containers-3sikj/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Issues about this test specifically: #36706

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc820275c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-2asj6/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-2asj6/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-2asj6/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82025d400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-6sqmj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-6sqmj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-6sqmj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820807b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-r67nr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-r67nr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-r67nr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27195

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820346e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-t261b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-t261b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-t261b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31085 #34207 #37097

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1478/

Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 18 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-cd257a61  n1-standard-2               2016-12-09T09:21:53.829-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-cd257a61-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
+gke-bootstrap-e2e-default-pool-cd257a61-bsux  us-central1-f  n1-standard-2               10.240.0.2   35.184.66.68   RUNNING
+gke-bootstrap-e2e-default-pool-cd257a61-hhbk  us-central1-f  n1-standard-2               10.240.0.4   35.184.67.175  RUNNING
+gke-bootstrap-e2e-default-pool-cd257a61-ryn3  us-central1-f  n1-standard-2               10.240.0.3   35.184.74.193  RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-cd257a61-bsux  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-cd257a61-hhbk  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-cd257a61-ryn3  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-ba6fd10a-36565dc9-be34-11e6-bcc3-42010af0002d  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-cd257a61-hhbk  1000
+gke-bootstrap-e2e-ba6fd10a-36860457-be34-11e6-bcc3-42010af0002d  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-cd257a61-ryn3  1000
+gke-bootstrap-e2e-ba6fd10a-36f480be-be34-11e6-bcc3-42010af0002d  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-cd257a61-bsux  1000
+gke-bootstrap-e2e-ba6fd10a-all  bootstrap-e2e  10.72.0.0/14        tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-ba6fd10a-ssh  bootstrap-e2e  104.198.171.147/32  tcp:22                                  gke-bootstrap-e2e-ba6fd10a-node
+gke-bootstrap-e2e-ba6fd10a-vms  bootstrap-e2e  10.240.0.0/16       tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-ba6fd10a-node

Issues about this test specifically: #33373 #33416 #34060
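
For reference, the leaked objects above are ordinary GCE resources (instances, an instance group, disks, routes, firewall rules) left behind after teardown timed out. Something like the following gcloud queries would list them for manual cleanup; the zone is taken from the diff above, but the project flag and the exact name prefix are assumptions, not read from the job config:

    # List leftover bootstrap-e2e instances and their managed instance group.
    gcloud compute instances list --zones=us-central1-f --filter="name~'^gke-bootstrap-e2e'"
    gcloud compute instance-groups managed list --zones=us-central1-f --filter="name~'^gke-bootstrap-e2e'"

    # List leftover firewall rules created for the cluster.
    gcloud compute firewall-rules list --filter="name~'^gke-bootstrap-e2e'"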

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 1h20m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 1h20m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 1h20m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1487/

Multiple broken tests:

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:59
Dec  9 13:59:36.277: unable to delete configMap configmap-test-volume-map-bde54ec9-be5a-11e6-95f9-0242ac11000b: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-q63gi/configmaps/configmap-test-volume-map-bde54ec9-be5a-11e6-95f9-0242ac11000b\"") has prevented the request from succeeding (delete configmaps configmap-test-volume-map-bde54ec9-be5a-11e6-95f9-0242ac11000b)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:400

Issues about this test specifically: #35790

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:198
Expected error:
    <*errors.StatusError | 0xc820cad900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-g6jpg/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-g6jpg/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-g6jpg/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:155

Issues about this test specifically: #38516

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc820e00d30>: {
        s: "error while stopping RC: rc-light-ctrl: Scaling the resource failed with: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-luw97/replicationcontrollers/rc-light-ctrl\\\"\") has prevented the request from succeeding (put replicationControllers rc-light-ctrl); Current resource version 8655",
    }
    error while stopping RC: rc-light-ctrl: Scaling the resource failed with: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-luw97/replicationcontrollers/rc-light-ctrl\"") has prevented the request from succeeding (put replicationControllers rc-light-ctrl); Current resource version 8655
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:307

Issues about this test specifically: #27443 #27835 #28900 #32512

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.StatusError | 0xc820cad380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-76k1b/pods?fieldSelector=metadata.name%3Dnetserver-0\\\"\") has prevented the request from succeeding (get pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-76k1b/pods?fieldSelector=metadata.name%3Dnetserver-0\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-76k1b/pods?fieldSelector=metadata.name%3Dnetserver-0\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32375

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 13:59:38.956: Couldn't delete ns: "e2e-tests-port-forwarding-wrjkg": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-port-forwarding-wrjkg/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-port-forwarding-wrjkg/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820c5bc20), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27673

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1583/

Multiple broken tests:

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc42039c620>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203fb2d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #32375

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
    <*errors.errorString | 0xc4203c2ed0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #37056

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:240
Expected success, but got an error:
    <*errors.errorString | 0xc4203c2ed0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:232

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
waiting for server pod to start
Expected error:
    <*errors.errorString | 0xc4203a55e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:65

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203ef140>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420a4c630>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203ad780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1586/

Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420b6c460>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420d88730>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:5, Replicas:5, UpdatedReplicas:5, AvailableReplicas:3, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63617066852, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63617066852, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:5, Replicas:5, UpdatedReplicas:5, AvailableReplicas:3, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63617066852, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63617066852, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1180

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:66
Expected error:
    <*errors.errorString | 0xc420412e30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:283

Issues about this test specifically: #37144

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc420412e30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
wait for pod "pod-19f8548b-bfb6-11e6-8933-0242ac110003" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42043cdf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Dec 11 07:34:54.302: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1580

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42043d000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/1912/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.143.104 --kubeconfig=/workspace/.kube/config get pods update-demo-nautilus-irilg -o template --template={{if (exists . \"status\" \"containerStatuses\")}}{{range .status.containerStatuses}}{{if (and (eq .name \"update-demo\") (exists . \"state\" \"running\"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f5l1v] []  <nil>  The connection to the server 104.198.143.104 was refused - did you specify the right host or port?\n [] <nil> 0xc8209e06e0 exit status 1 <nil> true [0xc8200b8cf0 0xc8200b8d68 0xc8200b8dd0] [0xc8200b8cf0 0xc8200b8d68 0xc8200b8dd0] [0xc8200b8d38 0xc8200b8db0] [0xafae20 0xafae20] 0xc820bec1e0}:\nCommand stdout:\n\nstderr:\nThe connection to the server 104.198.143.104 was refused - did you specify the right host or port?\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.143.104 --kubeconfig=/workspace/.kube/config get pods update-demo-nautilus-irilg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f5l1v] []  <nil>  The connection to the server 104.198.143.104 was refused - did you specify the right host or port?
     [] <nil> 0xc8209e06e0 exit status 1 <nil> true [0xc8200b8cf0 0xc8200b8d68 0xc8200b8dd0] [0xc8200b8cf0 0xc8200b8d68 0xc8200b8dd0] [0xc8200b8d38 0xc8200b8db0] [0xafae20 0xafae20] 0xc820bec1e0}:
    Command stdout:
    
    stderr:
    The connection to the server 104.198.143.104 was refused - did you specify the right host or port?
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:83
Expected error:
    <*url.Error | 0xc8209b3bf0>: {
        Op: "Get",
        URL: "https://104.198.143.104/api/v1/namespaces/e2e-tests-deployment-lncq6/pods?labelSelector=name%3Dnginx",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhƏh",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.198.143.104/api/v1/namespaces/e2e-tests-deployment-lncq6/pods?labelSelector=name%3Dnginx: dial tcp 104.198.143.104:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1716

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233
getting pod 
Expected error:
    <*url.Error | 0xc8202489f0>: {
        Op: "Get",
        URL: "https://104.198.143.104/api/v1/namespaces/e2e-tests-container-probe-9eaw4/pods/liveness-http",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhƏh",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.198.143.104/api/v1/namespaces/e2e-tests-container-probe-9eaw4/pods/liveness-http: dial tcp 104.198.143.104:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:350

Issues about this test specifically: #30342 #31350

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/2186/

Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:333
Expected error:
    <*errors.errorString | 0xc8208fd0f0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:295

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
Expected error:
    <*errors.errorString | 0xc82081ca70>: {
        s: "expected container test-container success: gave up waiting for pod 'client-containers-a05b7104-c7d6-11e6-b822-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'client-containers-a05b7104-c7d6-11e6-b822-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #29994

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:175
starting pod liveness-http in namespace e2e-tests-container-probe-se4ti
Expected error:
    <*errors.errorString | 0xc8200e57b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:334

Issues about this test specifically: #38511

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:40
Expected error:
    <*errors.errorString | 0xc8200ef7b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:109

Issues about this test specifically: #30981

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1212
Expected error:
    <*errors.errorString | 0xc820cf8040>: {
        s: "timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.35.221 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-9w5vx run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc82088d7e0 Waiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is 
Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod 
ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: 
false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting 
for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\n  [] <nil> 0xc8207ba0e0 <nil> <nil> true [0xc82054a0e0 0xc82054a108 0xc82054a120] [0xc82054a0e0 0xc82054a108 0xc82054a120] [0xc82054a0e8 0xc82054a100 0xc82054a110] [0xafacc0 0xafae20 0xafae20] 0xc820ae95c0}:\nCommand stdout:\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false\n\nstderr:\n\n",
    }
    timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.35.221 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-9w5vx run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc82088d7e0 Waiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false
    (the line above repeats throughout the wait loop; the pod never left Pending)
      [] <nil> 0xc8207ba0e0 <nil> <nil> true [0xc82054a0e0 0xc82054a108 0xc82054a120] [0xc82054a0e0 0xc82054a108 0xc82054a120] [0xc82054a0e8 0xc82054a100 0xc82054a110] [0xafacc0 0xafae20 0xafae20] 0xc820ae95c0}:
    Command stdout:
    Waiting for pod e2e-tests-kubectl-9w5vx/e2e-test-rm-busybox-job-bg8q7 to be running, status is Pending, pod ready: false
    (the same line repeats for the rest of the captured stdout, which is cut off mid-line)

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/2188/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:402
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.143.104 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-jg9kx run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never failure-4 --leave-stdin-open -- /bin/sh -c exit 42] []  0xc8209c50a0  Error from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-jg9kx/pods/failure-4\\\"\") has prevented the request from succeeding (get pods failure-4)\n [] <nil> 0xc8209c5900 exit status 1 <nil> true [0xc8202221d0 0xc8202221f8 0xc820222208] [0xc8202221d0 0xc8202221f8 0xc820222208] [0xc8202221d8 0xc8202221f0 0xc820222200] [0xafacc0 0xafae20 0xafae20] 0xc820d1b800}:\nCommand stdout:\n\nstderr:\nError from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-jg9kx/pods/failure-4\\\"\") has prevented the request from succeeding (get pods failure-4)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.143.104 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-jg9kx run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never failure-4 --leave-stdin-open -- /bin/sh -c exit 42] []  0xc8209c50a0  Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-jg9kx/pods/failure-4\"") has prevented the request from succeeding (get pods failure-4)
     [] <nil> 0xc8209c5900 exit status 1 <nil> true [0xc8202221d0 0xc8202221f8 0xc820222208] [0xc8202221d0 0xc8202221f8 0xc820222208] [0xc8202221d8 0xc8202221f0 0xc820222200] [0xafacc0 0xafae20 0xafae20] 0xc820d1b800}:
    Command stdout:
    
    stderr:
    Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-jg9kx/pods/failure-4\"") has prevented the request from succeeding (get pods failure-4)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:664

Issues about this test specifically: #31151 #35586
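
For hand triage, the failing step can be replayed outside the suite. A minimal sketch, assuming a reachable cluster and an illustrative namespace in $NS (the CI endpoint and namespace above no longer exist); the readback step is just one way to check the exit code, not the suite's own assertion:

    # Re-run the pod the test creates; busybox should exit with code 42.
    kubectl --namespace="$NS" run -i --image=gcr.io/google_containers/busybox:1.24 \
      --restart=Never failure-4 --leave-stdin-open -- /bin/sh -c 'exit 42'
    # Read the recorded exit code back from pod status.
    kubectl --namespace="$NS" get pod failure-4 \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}'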

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:03.030: Couldn't delete ns: "e2e-tests-job-g7hwa": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-job-g7hwa\"") has prevented the request from succeeding (delete namespaces e2e-tests-job-g7hwa) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-job-g7hwa\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-job-g7hwa)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820406a50), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28006 #28866 #29613 #36224
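
Most of the failures in this run share the symptom above: the apiserver returned 500 while the framework swept the namespace at teardown. A rough triage sketch, assuming the cluster (or a local repro) is still reachable; the namespace name is copied from the message above:

    # Inspect whether the namespace is stuck terminating (phase, finalizers).
    kubectl get namespace e2e-tests-job-g7hwa -o yaml
    # Replay the failing delete with verbose client logging (-v=8 dumps the
    # raw request and the 500 response body).
    kubectl delete namespace e2e-tests-job-g7hwa -v=8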

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:18.515: Couldn't delete ns: "e2e-tests-port-forwarding-nfu4c": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-port-forwarding-nfu4c/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-port-forwarding-nfu4c/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820b4bcc0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27673

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:15.860: Couldn't delete ns: "e2e-tests-container-probe-plbbw": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-plbbw/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-plbbw/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820ae94f0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #38511

Failed: [k8s.io] Downward API volume should provide container's cpu request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:38:57.227: Couldn't delete ns: "e2e-tests-downward-api-wnzft": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-wnzft/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-wnzft/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820671770), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:17.018: Couldn't delete ns: "e2e-tests-emptydir-qwwks": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-qwwks\"") has prevented the request from succeeding (delete namespaces e2e-tests-emptydir-qwwks) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-qwwks\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-emptydir-qwwks)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82080b6d0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #34226

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:14.245: Couldn't delete ns: "e2e-tests-deployment-fnng3": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-fnng3/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-fnng3/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82057c9b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29828

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:141
Expected error:
    <*errors.StatusError | 0xc820e89080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-v1job-rqkm0/jobs\\\"\") has prevented the request from succeeding (post jobs.batch)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "batch",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-v1job-rqkm0/jobs\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-v1job-rqkm0/jobs\"") has prevented the request from succeeding (post jobs.batch)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:124

Issues about this test specifically: #29976 #30464 #30687
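
Here the create itself gets the 500 (a POST to the batch/v1 jobs path). A quick check of whether that path is serving at all, sketched with plain kubectl and assuming cluster access:

    # Control-plane health at a glance.
    kubectl get componentstatuses
    # Hit the same batch/v1 jobs endpoint the test POSTs to; -v=8 shows the
    # raw request/response.
    kubectl --namespace=e2e-tests-v1job-rqkm0 get jobs.batch -v=8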

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:20.225: Couldn't delete ns: "e2e-tests-deployment-rrblr": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-deployment-rrblr/events\"") has prevented the request from succeeding (get events) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-deployment-rrblr/events\\\"\") has prevented the request from succeeding (get events)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820538b40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:583
Expected error:
    <*errors.StatusError | 0xc820d7c400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-yj24f/pods/pod-logs-websocket-f31f5673-c7de-11e6-a231-0242ac110006\\\"\") has prevented the request from succeeding (get pods pod-logs-websocket-f31f5673-c7de-11e6-a231-0242ac110006)",
            Reason: "InternalError",
            Details: {
                Name: "pod-logs-websocket-f31f5673-c7de-11e6-a231-0242ac110006",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-yj24f/pods/pod-logs-websocket-f31f5673-c7de-11e6-a231-0242ac110006\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-yj24f/pods/pod-logs-websocket-f31f5673-c7de-11e6-a231-0242ac110006\"") has prevented the request from succeeding (get pods pod-logs-websocket-f31f5673-c7de-11e6-a231-0242ac110006)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:60

Issues about this test specifically: #30263

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.StatusError | 0xc820b57000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-ms4o5/replicationcontrollers/rc-light\\\"\") has prevented the request from succeeding (get replicationControllers rc-light)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light",
                Group: "",
                Kind: "replicationControllers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-ms4o5/replicationcontrollers/rc-light\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-ms4o5/replicationcontrollers/rc-light\"") has prevented the request from succeeding (get replicationControllers rc-light)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:249

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:12.378: Couldn't delete ns: "e2e-tests-kubectl-bw5dj": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-bw5dj/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-bw5dj/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820890820), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:08.259: Couldn't delete ns: "e2e-tests-v1job-yzii2": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-v1job-yzii2/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-v1job-yzii2/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82094c0a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:14.723: Couldn't delete ns: "e2e-tests-nettest-kuppo": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-nettest-kuppo/networkpolicies\"") has prevented the request from succeeding (get networkpolicies.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-nettest-kuppo/networkpolicies\\\"\") has prevented the request from succeeding (get networkpolicies.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8205a0e10), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #33631 #33995 #34970

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:13.028: Couldn't delete ns: "e2e-tests-e2e-kubelet-etc-hosts-k8ppu": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-e2e-kubelet-etc-hosts-k8ppu/ingresses\"") has prevented the request from succeeding (get ingresses.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-e2e-kubelet-etc-hosts-k8ppu/ingresses\\\"\") has prevented the request from succeeding (get ingresses.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820b3f860), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27023 #34604 #38550

Failed: [k8s.io] Multi-AZ Clusters should spread the pods of a service across zones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820a8b780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-multi-az-vdl6h/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-multi-az-vdl6h/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-multi-az-vdl6h/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34122

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:11.480: Couldn't delete ns: "e2e-tests-kubectl-typ4e": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-typ4e/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-typ4e/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820c6a230), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:39:00.525: Couldn't delete ns: "e2e-tests-kubectl-vdrit": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-vdrit/serviceaccounts\"") has prevented the request from succeeding (get serviceaccounts) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-vdrit/serviceaccounts\\\"\") has prevented the request from succeeding (get serviceaccounts)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820b3a3c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 21 16:38:54.768: Couldn't delete ns: "e2e-tests-svc-latency-d578h": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-svc-latency-d578h/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-svc-latency-d578h/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82079c4b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #30632

@k8s-github-robot

Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/2233/
Multiple broken tests:

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:44.599: Couldn't delete ns: "e2e-tests-v1job-lzqax": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-v1job-lzqax/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-v1job-lzqax/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8207bccd0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29657

Failed: [k8s.io] Multi-AZ Clusters should spread the pods of a service across zones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820b8a900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-multi-az-6t1vd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-multi-az-6t1vd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-multi-az-6t1vd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34122

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:333
Expected error:
    <*errors.StatusError | 0xc820ad2280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-services-bwvmn/pods/execpod-8zzdi\\\"\") has prevented the request from succeeding (delete pods execpod-8zzdi)",
            Reason: "InternalError",
            Details: {
                Name: "execpod-8zzdi",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-bwvmn/pods/execpod-8zzdi\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-bwvmn/pods/execpod-8zzdi\"") has prevented the request from succeeding (delete pods execpod-8zzdi)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1443

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82084ac00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-port-forwarding-j7wy6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-j7wy6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-j7wy6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26955

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:96
Expected error:
    <*errors.StatusError | 0xc82017f800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-job-rx5zz/jobs/rand-non-local\\\"\") has prevented the request from succeeding (get jobs.extensions rand-non-local)",
            Reason: "InternalError",
            Details: {
                Name: "rand-non-local",
                Group: "extensions",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-job-rx5zz/jobs/rand-non-local\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-job-rx5zz/jobs/rand-non-local\"") has prevented the request from succeeding (get jobs.extensions rand-non-local)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:95

Issues about this test specifically: #31498 #33896 #35507

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:47.426: Couldn't delete ns: "e2e-tests-services-p2qa1": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-p2qa1/replicationcontrollers\"") has prevented the request from succeeding (get replicationcontrollers) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-services-p2qa1/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationcontrollers)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820bf9400), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:484
Expected error:
    <*errors.StatusError | 0xc820ddf800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-p0l3e/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-p0l3e/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-p0l3e/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:434

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:189
Expected error:
    <kubectl.ScaleError>: {
        FailureType: 1,
        ResourceVersion: "3528",
        ActualError: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: ""},
                Status: "Failure",
                Message: "an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-v1job-0ex2i/jobs/foo\\\"\") has prevented the request from succeeding (put jobs.batch foo)",
                Reason: "InternalError",
                Details: {
                    Name: "foo",
                    Group: "batch",
                    Kind: "jobs",
                    Causes: [
                        {
                            Type: "UnexpectedServerResponse",
                            Message: "Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-v1job-0ex2i/jobs/foo\"",
                            Field: "",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 500,
            },
        },
    }
    Scaling the resource failed with: an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-v1job-0ex2i/jobs/foo\"") has prevented the request from succeeding (put jobs.batch foo); Current resource version 3528
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:183

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:54.983: Couldn't delete ns: "e2e-tests-pods-u9ia0": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-pods-u9ia0/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-pods-u9ia0/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820b2f040), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #33985

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:35.106: Couldn't delete ns: "e2e-tests-container-probe-mtp44": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-mtp44/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-mtp44/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820849090), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28084

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:10:10.343: Couldn't delete ns: "e2e-tests-nettest-n4hbd": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-nettest-n4hbd\"") has prevented the request from succeeding (delete namespaces e2e-tests-nettest-n4hbd) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-nettest-n4hbd\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-nettest-n4hbd)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8208f4230), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32375

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:34.052: Couldn't delete ns: "e2e-tests-init-container-z80ou": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-init-container-z80ou/resourcequotas\"") has prevented the request from succeeding (get resourcequotas) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-init-container-z80ou/resourcequotas\\\"\") has prevented the request from succeeding (get resourcequotas)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820b89f90), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #31873

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820d4d880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-kqr88/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-kqr88/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-kqr88/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820a77580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-smk11/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-smk11/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-smk11/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29831

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8204cf700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-svcaccounts-kvr0a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-svcaccounts-kvr0a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-svcaccounts-kvr0a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #37526

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:31.756: Couldn't delete ns: "e2e-tests-prestop-3zues": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-prestop-3zues/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-prestop-3zues/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82055a3c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:56.286: Couldn't delete ns: "e2e-tests-downward-api-ic6gk": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-ic6gk/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-ic6gk/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82079a5f0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #37423

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:35.920: Couldn't delete ns: "e2e-tests-v1job-j4fgg": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-v1job-j4fgg\"") has prevented the request from succeeding (delete namespaces e2e-tests-v1job-j4fgg) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-v1job-j4fgg\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-v1job-j4fgg)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820778550), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
Expected error:
    <*errors.StatusError | 0xc82016c580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-e2e-privilegedpod-04cxp/pods?fieldSelector=metadata.name%3Dhostexec\\\"\") has prevented the request from succeeding (get pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-e2e-privilegedpod-04cxp/pods?fieldSelector=metadata.name%3Dhostexec\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-e2e-privilegedpod-04cxp/pods?fieldSelector=metadata.name%3Dhostexec\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:33.917: Couldn't delete ns: "e2e-tests-configmap-dcyug": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-dcyug\"") has prevented the request from succeeding (delete namespaces e2e-tests-configmap-dcyug) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-configmap-dcyug\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-configmap-dcyug)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820907db0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32949

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:28.316: Couldn't delete ns: "e2e-tests-metrics-grabber-8zkat": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-metrics-grabber-8zkat/limitranges\"") has prevented the request from succeeding (get limitranges) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-metrics-grabber-8zkat/limitranges\\\"\") has prevented the request from succeeding (get limitranges)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820bf76d0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:41.884: Couldn't delete ns: "e2e-tests-kubelet-dvvtj": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubelet-dvvtj\"") has prevented the request from succeeding (delete namespaces e2e-tests-kubelet-dvvtj) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubelet-dvvtj\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-kubelet-dvvtj)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8208bc370), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:185
Expected error:
    <*errors.StatusError | 0xc8200d4e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-job-0qlqu/pods?labelSelector=job%3Dfoo\\\"\") has prevented the request from succeeding (get pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-job-0qlqu/pods?labelSelector=job%3Dfoo\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-job-0qlqu/pods?labelSelector=job%3Dfoo\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:172

Issues about this test specifically: #28003

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820df3080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-dns-es755/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-dns-es755/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-dns-es755/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:29.570: Couldn't delete ns: "e2e-tests-kubectl-em24l": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-em24l/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-em24l/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8202680a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:28.920: Couldn't delete ns: "e2e-tests-kubectl-346lf": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-346lf/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-346lf/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820c74a50), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:62
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc820dc4400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-ybj2m/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-ybj2m/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-ybj2m/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:10:16.983: Couldn't delete ns: "e2e-tests-hostpath-loprz": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-hostpath-loprz/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-hostpath-loprz/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8208c9a40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82057ad00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-v8scg/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-v8scg/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-v8scg/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:23.054: Couldn't delete ns: "e2e-tests-init-container-v6rr4": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-init-container-v6rr4/ingresses\"") has prevented the request from succeeding (get ingresses.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-init-container-v6rr4/ingresses\\\"\") has prevented the request from succeeding (get ingresses.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82080be00), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #31936

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 22 11:09:34.762: Couldn't delete ns: "e2e-tests-kubectl-1ex5g": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-kubectl-1ex5g/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-kubectl-1ex5g/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82027b5e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82057e680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-e2e-kubelet-etc-hosts-xtlz9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-e2e-kubelet-etc-hosts-xtlz9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-e2e-kubelet-etc-hosts-xtlz9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27023 #34604 #38550
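
The many "get serviceAccounts" failures pinned to framework.go:223 all hit the same URL pattern, /api/v1/watch/namespaces/<ns>/serviceaccounts?fieldSelector=metadata.name%3Ddefault, which suggests the per-test setup waiting for the "default" ServiceAccount to appear in the freshly created namespace before the test body runs. A minimal sketch of an equivalent wait, assuming a current client-go and a plain poll instead of the framework's field-selector watch:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ns := "default" // the generated e2e test namespace name would go here
        err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
            _, getErr := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
            if getErr != nil {
                // A transient 500 is retried here; a persistent one exhausts
                // the timeout and fails the test during setup, as above.
                return false, nil
            }
            return true, nil
        })
        fmt.Println("wait result:", err)
    }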

@k8s-github-robot

Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/3242/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc820196b10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #28337

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc8201016a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc8201a6b90>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:595

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc82084ecf0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "nginx" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1151

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc8200e77b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584
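
Unlike the 500s in the other runs, the DNS and deployment failures in this run all report "timed out waiting for the condition". That exact string is the generic timeout error from the apimachinery wait package: the polled condition (DNS records resolving, deployment status matching) never became true before the test's deadline. A minimal sketch of where the message comes from, assuming a current apimachinery:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        err := wait.Poll(100*time.Millisecond, 500*time.Millisecond, func() (bool, error) {
            return false, nil // condition never satisfied, e.g. a DNS record that never resolves
        })
        fmt.Println(err) // prints: timed out waiting for the condition
    }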

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/3583/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820316980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-8xw0p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-8xw0p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-8xw0p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82025ee00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-0qyf4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-0qyf4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-0qyf4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820a94080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-8yy4o/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-8yy4o/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-8yy4o/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Downward API should provide pod IP as an env var {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e66880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-uvfyw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-uvfyw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-uvfyw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8209eaa80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-containers-7fl2x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-containers-7fl2x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-containers-7fl2x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34520

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820833d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-xo3gm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-xo3gm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-xo3gm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e5e200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-var-expansion-f0pwa/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-f0pwa/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-f0pwa/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28503

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e59380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-bi3tl/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-bi3tl/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-bi3tl/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #37274

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820cc6880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-dns-kujtw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-dns-kujtw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-dns-kujtw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8207b7900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-j7q1w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-j7q1w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-j7q1w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] HostPath should support subPath [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820d7e580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-hostpath-gnhet/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-hostpath-gnhet/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-hostpath-gnhet/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35628

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 13 15:45:58.715: Couldn't delete ns: "e2e-tests-kubectl-0fl07": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-0fl07/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-0fl07/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820b546e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e77f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-lrokm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-lrokm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-lrokm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27079

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820d13b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-jk6c8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-jk6c8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-jk6c8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34827

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820da3980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-w39dm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-w39dm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-w39dm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30263

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 13 15:45:39.464: Couldn't delete ns: "e2e-tests-dns-iidyf": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-dns-iidyf/ingresses\"") has prevented the request from succeeding (get ingresses.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-dns-iidyf/ingresses\\\"\") has prevented the request from succeeding (get ingresses.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820876140), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28337

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 13 15:45:25.323: Couldn't delete ns: "e2e-tests-configmap-2svqo": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-2svqo/limitranges\"") has prevented the request from succeeding (get limitranges) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-configmap-2svqo/limitranges\\\"\") has prevented the request from succeeding (get limitranges)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a07cc0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82017f480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-v1job-8k22k/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-8k22k/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-8k22k/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820eea780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-clientset-eoerh/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-clientset-eoerh/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-clientset-eoerh/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:56
Expected error:
    <*errors.errorString | 0xc8208720e0>: {
        s: "deployment test-new-deployment failed to create new RS: an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-nd6zf/deployments/test-new-deployment\\\"\") has prevented the request from succeeding (get deployments.extensions test-new-deployment)",
    }
    deployment test-new-deployment failed to create new RS: an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-nd6zf/deployments/test-new-deployment\"") has prevented the request from succeeding (get deployments.extensions test-new-deployment)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:271

Issues about this test specifically: #35579

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:847
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.17.74 --kubeconfig=/workspace/.kube/config logs redis-master-hw8dr redis-master --namespace=e2e-tests-kubectl-uu0vi] []  <nil>  error: 500 Internal Server Error while accessing https://104.198.17.74/api/v1/namespaces/e2e-tests-kubectl-uu0vi/pods/redis-master-hw8dr/log?container=redis-master: Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-uu0vi/pods/redis-master-hw8dr/log?container=redis-master\"\n [] <nil> 0xc820be3ac0 exit status 1 <nil> true [0xc82055ed70 0xc82055edb8 0xc82055ede0] [0xc82055ed70 0xc82055edb8 0xc82055ede0] [0xc82055eda0 0xc82055edd8] [0xafae20 0xafae20] 0xc820cacde0}:\nCommand stdout:\n\nstderr:\nerror: 500 Internal Server Error while accessing https://104.198.17.74/api/v1/namespaces/e2e-tests-kubectl-uu0vi/pods/redis-master-hw8dr/log?container=redis-master: Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-uu0vi/pods/redis-master-hw8dr/log?container=redis-master\"\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.17.74 --kubeconfig=/workspace/.kube/config logs redis-master-hw8dr redis-master --namespace=e2e-tests-kubectl-uu0vi] []  <nil>  error: 500 Internal Server Error while accessing https://104.198.17.74/api/v1/namespaces/e2e-tests-kubectl-uu0vi/pods/redis-master-hw8dr/log?container=redis-master: Internal Server Error: "/api/v1/namespaces/e2e-tests-kubectl-uu0vi/pods/redis-master-hw8dr/log?container=redis-master"
     [] <nil> 0xc820be3ac0 exit status 1 <nil> true [0xc82055ed70 0xc82055edb8 0xc82055ede0] [0xc82055ed70 0xc82055edb8 0xc82055ede0] [0xc82055eda0 0xc82055edd8] [0xafae20 0xafae20] 0xc820cacde0}:
    Command stdout:
    
    stderr:
    error: 500 Internal Server Error while accessing https://104.198.17.74/api/v1/namespaces/e2e-tests-kubectl-uu0vi/pods/redis-master-hw8dr/log?container=redis-master: Internal Server Error: "/api/v1/namespaces/e2e-tests-kubectl-uu0vi/pods/redis-master-hw8dr/log?container=redis-master"
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576
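
This kubectl case fails the whole test on a single 500 from the pod log subresource. As a sketch only (a hypothetical wrapper, not the framework's own kubectl runner), retrying the CLI invocation a few times would separate transient apiserver errors from genuine log failures:

```go
package e2esketch

import (
	"fmt"
	"os/exec"
	"time"
)

// kubectlLogsWithRetry is a hypothetical wrapper: it shells out to kubectl
// and retries, since a single 500 from the log subresource is often
// transient on a loaded apiserver.
func kubectlLogsWithRetry(server, kubeconfig, ns, pod, container string, attempts int) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl",
			"--server="+server,
			"--kubeconfig="+kubeconfig,
			"logs", pod, container,
			"--namespace="+ns)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(2 * time.Second)
	}
	return "", lastErr
}
```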

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820021500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-ky30b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-ky30b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-ky30b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] EmptyDir wrapper volumes should becomes running {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:180
Expected error:
    <*errors.StatusError | 0xc820ee0080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-wrapper-2iaqr/pods?fieldSelector=metadata.name%3Dpod-secrets-61b9cfb7-d9ea-11e6-837a-0242ac110002&resourceVersion=4166\\\"\") has prevented the request from succeeding (get pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-wrapper-2iaqr/pods?fieldSelector=metadata.name%3Dpod-secrets-61b9cfb7-d9ea-11e6-837a-0242ac110002&resourceVersion=4166\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-wrapper-2iaqr/pods?fieldSelector=metadata.name%3Dpod-secrets-61b9cfb7-d9ea-11e6-837a-0242ac110002&resourceVersion=4166\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:179

Issues about this test specifically: #28450

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 13 15:45:52.128: Couldn't delete ns: "e2e-tests-configmap-18yop": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-configmap-18yop/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-configmap-18yop/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820ba39f0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29052
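
The "Couldn't delete ns" failures all come out of the framework's namespace teardown, which lists remaining content after issuing the delete and fails on any error. A minimal sketch of a more tolerant cleanup, assuming current client-go APIs and a hypothetical `deleteNamespaceTolerantly` helper rather than the framework's real AfterEach path:

```go
package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceTolerantly is a hypothetical cleanup helper: it deletes the
// namespace and polls until it is gone, swallowing transient 500s from the
// apiserver instead of failing teardown outright.
func deleteNamespaceTolerantly(c kubernetes.Interface, ns string, timeout time.Duration) error {
	if err := c.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			return true, nil // namespace is gone
		case apierrors.IsInternalError(err):
			return false, nil // transient 500, keep polling
		case err != nil:
			return false, err
		default:
			return false, nil // still terminating
		}
	})
}
```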

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 13 15:45:30.756: Couldn't delete ns: "e2e-tests-services-qav46": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-services-qav46/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-services-qav46/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820c58e60), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8214a7280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-port-forwarding-okkff/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-okkff/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-okkff/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 13 15:45:03.623: Couldn't delete ns: "e2e-tests-kubectl-yk3ii": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-yk3ii/networkpolicies\"") has prevented the request from succeeding (get networkpolicies.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-yk3ii/networkpolicies\\\"\") has prevented the request from succeeding (get networkpolicies.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8209facd0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8215ea400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-3n04u/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-3n04u/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-3n04u/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35473

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.StatusError | 0xc820754c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-vchml/replicationcontrollers/rc-light\\\"\") has prevented the request from succeeding (get replicationControllers rc-light)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light",
                Group: "",
                Kind: "replicationControllers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-vchml/replicationcontrollers/rc-light\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-vchml/replicationcontrollers/rc-light\"") has prevented the request from succeeding (get replicationControllers rc-light)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:249

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8217a3280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-init-container-4qkiy/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-4qkiy/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-4qkiy/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31873

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820da4180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-3toa4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-3toa4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-3toa4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82100da80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-prestop-l3tr0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-prestop-l3tr0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-prestop-l3tr0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 13 15:45:34.676: Couldn't delete ns: "e2e-tests-downward-api-7uptj": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-7uptj/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-7uptj/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820ac8550), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #31836

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8207b6800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-v64ql/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-v64ql/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-v64ql/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Downward API volume should provide container's memory limit {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e5a280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-5bk9z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-5bk9z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-5bk9z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #38500

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 13 15:45:34.324: Couldn't delete ns: "e2e-tests-cadvisor-8g1d6": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-cadvisor-8g1d6/jobs\"") has prevented the request from succeeding (get jobs.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-cadvisor-8g1d6/jobs\\\"\") has prevented the request from succeeding (get jobs.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820c2d220), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32371

Failed: [k8s.io] Downward API volume should provide container's cpu limit {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82090f300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-uyamg/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-uyamg/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-uyamg/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 13 15:46:22.953: Couldn't delete ns: "e2e-tests-container-probe-oo8uo": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-container-probe-oo8uo/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-container-probe-oo8uo/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820cf19f0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28084

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820256f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-mxwva/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-mxwva/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-mxwva/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 13 15:45:18.508: Couldn't delete ns: "e2e-tests-downward-api-jnf17": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-jnf17/jobs\"") has prevented the request from succeeding (get jobs.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-jnf17/jobs\\\"\") has prevented the request from succeeding (get jobs.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82016b180), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #35590

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820ca4700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-ssh-owifk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-ssh-owifk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-ssh-owifk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8209d2100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-8b24

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/4103/
Multiple broken tests:

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:462
Jan 22 15:16:06.530: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2180

Issues about this test specifically: #28064 #28569 #34036

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc820cb57a0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc820d30ed0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:393

Issues about this test specifically: #27196 #28998 #32403 #33341
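
The two HPA failures and the NodePort failure in this run are all timeouts waiting for a single pod to become Running. A sketch of that wait, assuming current client-go APIs; `waitForRunningPods` is a hypothetical helper, not the framework's own readiness wait:

```go
package e2esketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunningPods is a hypothetical helper that waits until `want` pods
// matching labelSelector are Running in ns, roughly what the autoscaling and
// service tests above timed out on.
func waitForRunningPods(c kubernetes.Interface, ns, labelSelector string, want int, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: labelSelector})
		if err != nil {
			return false, nil // tolerate transient list errors while waiting
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == v1.PodRunning {
				running++
			}
		}
		return running >= want, nil
	})
}
```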

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/4156/
Multiple broken tests:

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8207ce480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-v1job-834cb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-834cb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-834cb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 23 13:26:49.177: Couldn't delete ns: "e2e-tests-proxy-j5h72": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-proxy-j5h72/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-proxy-j5h72/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a80140), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #35422

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820625200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-clientset-wlqh7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-clientset-wlqh7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-clientset-wlqh7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820c43600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-52b9d/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-52b9d/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-52b9d/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35790

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:48
Expected error:
    <*errors.errorString | 0xc820c5ce10>: {
        s: "failed to get logs from downwardapi-volume-a3b45330-e1b2-11e6-9e2c-0242ac110009 for client-container: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-jst0g/pods/downwardapi-volume-a3b45330-e1b2-11e6-9e2c-0242ac110009/log?container=client-container&previous=false\\\"\") has prevented the request from succeeding (get pods downwardapi-volume-a3b45330-e1b2-11e6-9e2c-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-a3b45330-e1b2-11e6-9e2c-0242ac110009 for client-container: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-jst0g/pods/downwardapi-volume-a3b45330-e1b2-11e6-9e2c-0242ac110009/log?container=client-container&previous=false\"") has prevented the request from succeeding (get pods downwardapi-volume-a3b45330-e1b2-11e6-9e2c-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #31836
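
This downward API failure comes from fetching the `client-container` logs over the pod log subresource, which returned a 500. For reference, a sketch of the client-go call that path exercises; `podContainerLogs` is a hypothetical helper, not the framework's log-collection code:

```go
package e2esketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// podContainerLogs is a hypothetical helper showing the client-go request
// behind the failure above: a GET on the pod's log subresource for a named
// container.
func podContainerLogs(c kubernetes.Interface, ns, pod, container string) (string, error) {
	req := c.CoreV1().Pods(ns).GetLogs(pod, &v1.PodLogOptions{Container: container})
	raw, err := req.DoRaw(context.TODO())
	if err != nil {
		return "", err
	}
	return string(raw), nil
}
```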

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820cfcb80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-scheduledjob-ufrtq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-ufrtq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-ufrtq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 23 13:26:49.030: Couldn't delete ns: "e2e-tests-services-x4hfm": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-services-x4hfm/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-services-x4hfm/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a5bf90), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e8c080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-gr933/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-gr933/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-gr933/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35590

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820612c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-init-container-1e27t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-1e27t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-1e27t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820f6aa80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-scheduledjob-pkxcq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-pkxcq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-pkxcq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e8c400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-9t8fn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-9t8fn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-9t8fn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #37071

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821349280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-ilw2t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-ilw2t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-ilw2t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35793

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 23 13:26:59.196: Couldn't delete ns: "e2e-tests-downward-api-qq5hz": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-qq5hz\"") has prevented the request from succeeding (delete namespaces e2e-tests-downward-api-qq5hz) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-qq5hz\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-downward-api-qq5hz)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8207bc1e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #37423

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e74600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-init-container-21qek/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-21qek/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-21qek/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31873

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8200d4880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-limitrange-g1xsk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-limitrange-g1xsk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-limitrange-g1xsk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27503

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821280600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-v1job-l7s87/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-l7s87/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-l7s87/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: ""},
                Status: "Failure",
                Message: "an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-9483r/replicasets/nginx-1157212501\\\"\") has prevented the request from succeeding (get replicasets.extensions nginx-1157212501)",
                Reason: "InternalError",
                Details: {
                    Name: "nginx-1157212501",
                    Group: "extensions",
                    Kind: "replicasets",
                    Causes: [
                        {
                            Type: "UnexpectedServerResponse",
                            Message: "Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-9483r/replicasets/nginx-1157212501\"",
                            Field: "",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 500,
            },
        },
    ]
    an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-9483r/replicasets/nginx-1157212501\"") has prevented the request from succeeding (get replicasets.extensions nginx-1157212501)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:202

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820da3000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-atkb7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-atkb7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-atkb7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27232

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820ed0180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-hostpath-ni88q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-hostpath-ni88q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-hostpath-ni88q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82087d800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-svc-latency-y6ikd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-svc-latency-y6ikd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-svc-latency-y6ikd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30632

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820a4d600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-19ij4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-19ij4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-19ij4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26838 #36165

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 23 13:27:10.470: Couldn't delete ns: "e2e-tests-v1job-7j117": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-v1job-7j117\"") has prevented the request from succeeding (delete namespaces e2e-tests-v1job-7j117) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-v1job-7j117\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-v1job-7j117)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820855270), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] Secrets should be consumable from pods in volume with Mode set in the item [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e74180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-secrets-lruzm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-lruzm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-lruzm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31969

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820c03b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubelet-34q5t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubelet-34q5t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubelet-34q5t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821256580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-n6k40/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-n6k40/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-n6k40/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:175
getting pod  in namespace e2e-tests-container-probe-icrb2
Expected error:
    <*errors.StatusError | 0xc820ea3c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-container-probe-icrb2/pods/liveness-http\\\"\") has prevented the request from succeeding (get pods liveness-http)",
            Reason: "InternalError",
            Details: {
                Name: "liveness-http",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-container-probe-icrb2/pods/liveness-http\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-container-probe-icrb2/pods/liveness-http\"") has prevented the request from succeeding (get pods liveness-http)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:340

Issues about this test specifically: #38511

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820ea9d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-replication-controller-29czp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-29czp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-29czp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820ef6700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-job-rvovk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-rvovk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-rvovk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820c96180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-2dc2g/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-2dc2g/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-2dc2g/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820f24200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-vv6e4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-vv6e4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-vv6e4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34658

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 23 13:26:50.244: Couldn't delete ns: "e2e-tests-resourcequota-lfso0": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-lfso0\"") has prevented the request from succeeding (delete namespaces e2e-tests-resourcequota-lfso0) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-lfso0\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-resourcequota-lfso0)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8209acd20), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #34212

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8209aac00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-job-uw3rk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-uw3rk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-uw3rk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28003

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8208fee80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-port-forwarding-r6nwf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-r6nwf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-r6nwf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26955

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8209e4400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-n59zt/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-n59zt/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-n59zt/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #37500

Failed: [k8s.io] Downward API volume should provide container's cpu request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 23 13:26:52.911: Couldn't delete ns: "e2e-tests-downward-api-jhrqi": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-downward-api-jhrqi/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-downward-api-jhrqi/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8207fe5a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8204cd680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-2qwz9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-2qwz9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-2qwz9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8207ce080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-job-ow14y/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-ow14y/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-ow14y/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31938

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820b1d580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-3bfja/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-3bfja/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-3bfja/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82128cb80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-replicaset-6ngk5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replicaset-6ngk5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replicaset-6ngk5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32023

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820d91f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-fr1re/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-fr1re/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-fr1re/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32949

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821520f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-zqg5r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: 

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/4207/
Multiple broken tests:

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:185
Expected error:
    <*url.Error | 0xc820b88000>: {
        Op: "Get",
        URL: "https://104.154.162.124/api/v1/namespaces/e2e-tests-job-2t6gl/pods?labelSelector=job%3Dfoo",
        Err: {
            err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
            timeout: true,
        },
    }
    Get https://104.154.162.124/api/v1/namespaces/e2e-tests-job-2t6gl/pods?labelSelector=job%3Dfoo: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:172

Issues about this test specifically: #28003

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:757
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:164
Expected
    <*errors.errorString | 0xc8200ef7b0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:149

Issues about this test specifically: #31873

Failed: [k8s.io] Downward API volume should provide container's memory request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:184
Expected error:
    <*errors.errorString | 0xc820b7c4e0>: {
        s: "expected container client-container success: gave up waiting for pod 'downwardapi-volume-115e6881-e26d-11e6-b31e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected container client-container success: gave up waiting for pod 'downwardapi-volume-115e6881-e26d-11e6-b31e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #29707

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:175
starting pod liveness-http in namespace e2e-tests-container-probe-lcglp
Expected error:
    <*errors.errorString | 0xc8200e77b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:334

Issues about this test specifically: #38511
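
For context on what these probe tests exercise: an httpGet liveness probe is just a periodic HTTP GET that must keep returning 200, and the e2e test then watches the container's restart count. A minimal illustrative sketch of that polling pattern in Go follows; the URL, interval, and timeout are hypothetical placeholders, not the e2e framework's actual code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout expires,
// roughly what a kubelet httpGet liveness probe does for a container.
func waitHealthy(url string, interval, timeout time.Duration) error {
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // probe succeeded
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s to report healthy", url)
}

func main() {
	if err := waitHealthy("http://10.0.0.1:8080/healthz", 2*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}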

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc820b22070>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1084

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc82083e370>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Downward API volume should provide container's cpu request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:175
Expected error:
    <*errors.errorString | 0xc82093cb50>: {
        s: "expected container client-container success: gave up waiting for pod 'downwardapi-volume-11344021-e26d-11e6-971a-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected container client-container success: gave up waiting for pod 'downwardapi-volume-11344021-e26d-11e6-971a-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:96
Expected error:
    <*url.Error | 0xc8208480c0>: {
        Op: "Get",
        URL: "https://104.154.162.124/apis/extensions/v1beta1/namespaces/e2e-tests-job-e9q52/jobs/rand-non-local",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\x9a\xa2|",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.154.162.124/apis/extensions/v1beta1/namespaces/e2e-tests-job-e9q52/jobs/rand-non-local: dial tcp 104.154.162.124:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:95

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1019
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.162.124 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-z69vo] []  <nil> Created e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367\nScaling up e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367 up to 1\n The connection to the server 104.154.162.124 was refused - did you specify the right host or port?\n [] <nil> 0xc820d02960 exit status 1 <nil> true [0xc8200e0048 0xc8200e0068 0xc8200e0088] [0xc8200e0048 0xc8200e0068 0xc8200e0088] [0xc8200e0060 0xc8200e0080] [0xafae20 0xafae20] 0xc8204631a0}:\nCommand stdout:\nCreated e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367\nScaling up e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367 up to 1\n\nstderr:\nThe connection to the server 104.154.162.124 was refused - did you specify the right host or port?\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.162.124 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-z69vo] []  <nil> Created e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367
    Scaling up e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367 up to 1
     The connection to the server 104.154.162.124 was refused - did you specify the right host or port?
     [] <nil> 0xc820d02960 exit status 1 <nil> true [0xc8200e0048 0xc8200e0068 0xc8200e0088] [0xc8200e0048 0xc8200e0068 0xc8200e0088] [0xc8200e0060 0xc8200e0080] [0xafae20 0xafae20] 0xc8204631a0}:
    Command stdout:
    Created e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367
    Scaling up e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-920078a1b5bbc51634709e91ec983367 up to 1
    
    stderr:
    The connection to the server 104.154.162.124 was refused - did you specify the right host or port?
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:167

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
Expected error:
    <*errors.errorString | 0xc8200fd6a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #29519 #32451

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1071
Expected error:
    <*errors.errorString | 0xc8201a2760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1393

Issues about this test specifically: #26172

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:63
Expected error:
    <*url.Error | 0xc820bc4000>: {
        Op: "Get",
        URL: "https://104.154.162.124/apis/batch/v1/namespaces/e2e-tests-v1job-py9by/jobs/all-succeed",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\x9a\xa2|",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.154.162.124/apis/batch/v1/namespaces/e2e-tests-v1job-py9by/jobs/all-succeed: dial tcp 104.154.162.124:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:62

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105
Expected error:
    <*errors.errorString | 0xc82083c4b0>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-10a4da2e-e26d-11e6-bcd9-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-10a4da2e-e26d-11e6-bcd9-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #26780

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:875
Expected error:
    <*url.Error | 0xc820cd9020>: {
        Op: "Get",
        URL: "https://104.154.162.124/api/v1/namespaces/e2e-tests-kubectl-1u7ka/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\x9a\xa2|",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.154.162.124/api/v1/namespaces/e2e-tests-kubectl-1u7ka/events: dial tcp 104.154.162.124:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2856

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:78
Expected error:
    <*url.Error | 0xc820c62090>: {
        Op: "Get",
        URL: "https://104.154.162.124/apis/extensions/v1beta1/namespaces/e2e-tests-job-6y3f7/jobs/fail-once-local",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\x9a\xa2|",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.154.162.124/apis/extensions/v1beta1/namespaces/e2e-tests-job-6y3f7/jobs/fail-once-local: dial tcp 104.154.162.124:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:77

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
Expected error:
    <*errors.errorString | 0xc8200e57b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:108

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            FailureType: 0,
            ResourceVersion: "Unknown",
            ActualError: {
                Op: "Get",
                URL: "https://104.154.162.124/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-0zfn8/replicasets/first-deployment-1496163690",
                Err: {
                    Op: "dial",
                    Net: "tcp",
                    Source: nil,
                    Addr: {
                        IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\x9a\xa2|",
                        Port: 443,
                        Zone: "",
                    },
                    Err: {
                        Syscall: "getsockopt",
                        Err: 0x6f,
                    },
                },
            },
        },
    ]
    Scaling the resource failed with: Get https://104.154.162.124/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-0zfn8/replicasets/first-deployment-1496163690: dial tcp 104.154.162.124:443: getsockopt: connection refused; Current resource version Unknown
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:202

Issues about this test specifically: #31502 #32947 #38646

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:286
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:285

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:745
Expected error:
    <*url.Error | 0xc8209500f0>: {
        Op: "Get",
        URL: "https://104.154.162.124/api/v1/namespaces/e2e-tests-kubectl-06epp/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\x9a\xa2|",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.154.162.124/api/v1/namespaces/e2e-tests-kubectl-06epp/events: dial tcp 104.154.162.124:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2856

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:80
Expected error:
    <*errors.errorString | 0xc820b22160>: {
        s: "error waiting for deployment test-rollback-no-revision-deployment rollbackTo to be cleared: Get https://104.154.162.124/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-6ei8a/deployments/test-rollback-no-revision-deployment: dial tcp 104.154.162.124:443: getsockopt: connection refused",
    }
    error waiting for deployment test-rollback-no-revision-deployment rollbackTo to be cleared: Get https://104.154.162.124/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-6ei8a/deployments/test-rollback-no-revision-deployment: dial tcp 104.154.162.124:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:895

Issues about this test specifically: #34687 #38442
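
Most of the failures in this run are not test-specific: the requests die with "dial tcp 104.154.162.124:443: getsockopt: connection refused", i.e. the apiserver stopped accepting connections mid-run. A quick way to tell an apiserver outage apart from a genuine test bug is to retry the raw TCP dial from the test host; a minimal sketch (the master address and retry budget below are placeholders taken from this run's logs):

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverReachable retries a plain TCP dial to the apiserver address and
// reports whether any attempt succeeded.
func apiserverReachable(addr string, retries int, delay time.Duration) bool {
	for i := 0; i < retries; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return true
		}
		fmt.Printf("attempt %d: %v\n", i+1, err)
		time.Sleep(delay)
	}
	return false
}

func main() {
	if apiserverReachable("104.154.162.124:443", 10, 3*time.Second) {
		fmt.Println("apiserver port is accepting connections again")
	} else {
		fmt.Println("apiserver port stayed unreachable; the control plane was likely down or restarting")
	}
}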

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*errors.errorString | 0xc8200e77b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:120
getting pod 
Expected error:
    <*url.Error | 0xc820c64000>: {
        Op: "Get",
        URL: "https://104.154.162.124/api/v1/namespaces/e2e-tests-container-probe-pcqp8/pods/liveness-exec",
        Err: {
            err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
            timeout: true,
        },
    }
    Get https://104.154.162.124/api/v1/namespaces/e2e-tests-container-probe-pcqp8/pods/liveness-exec: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:350

Issues about this test specifically: #30264

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc82017ab40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571
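
The DNS conformance failures above follow the same shape: a probe pod repeatedly resolves in-cluster names (e.g. kubernetes.default.svc.cluster.local) and the test gives up with "timed out waiting for the condition" when no lookup ever succeeds. A minimal sketch of that retry loop, runnable from inside a pod; the name and retry budget are hypothetical:

package main

import (
	"fmt"
	"net"
	"time"
)

// lookupWithRetry resolves name up to retries times, returning the first
// successful answer or the last error seen.
func lookupWithRetry(name string, retries int, delay time.Duration) ([]string, error) {
	var lastErr error
	for i := 0; i < retries; i++ {
		addrs, err := net.LookupHost(name)
		if err == nil {
			return addrs, nil
		}
		lastErr = err
		time.Sleep(delay)
	}
	return nil, lastErr
}

func main() {
	addrs, err := lookupWithRetry("kubernetes.default.svc.cluster.local", 5, 2*time.Second)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}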

@grodrigues3 grodrigues3 added sig/network Categorizes an issue or PR as relevant to SIG Network. and removed sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Mar 11, 2017
@ethernetdan

Seems ephemeral

@ethernetdan ethernetdan modified the milestones: v1.7, v1.6 Mar 14, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/7991/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124
Expected error:
    <*errors.errorString | 0xc420414e10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Mar 17 21:23:56.541: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420376ea0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203f6fd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc420326d50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/8431/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203fd100>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203fcf80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450 #43094

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420aeac80>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27196 #28998 #32403 #33341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/8664/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42042aec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:64
Expected error:
    <*errors.errorString | 0xc4206072c0>: {
        s: "expected pod \"client-containers-75d35305-16b1-11e7-9743-0242ac110006\" success: gave up waiting for pod 'client-containers-75d35305-16b1-11e7-9743-0242ac110006' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-75d35305-16b1-11e7-9743-0242ac110006" success: gave up waiting for pod 'client-containers-75d35305-16b1-11e7-9743-0242ac110006' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #29467

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203a8880>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc420df4830>: {
        s: "expected pod \"pod-secrets-760c531c-16b1-11e7-966b-0242ac110006\" success: gave up waiting for pod 'pod-secrets-760c531c-16b1-11e7-966b-0242ac110006' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-760c531c-16b1-11e7-966b-0242ac110006" success: gave up waiting for pod 'pod-secrets-760c531c-16b1-11e7-966b-0242ac110006' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #29221

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/8781/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc42121cd00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203aa810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1303
Apr  3 07:39:20.833: Failed to start proxy server: Failed to read from kubectl proxy stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1292

Issues about this test specifically: #27195

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/8843/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:497
failed: finding the contents of the mounted file.
Expected error:
    <*errors.errorString | 0xc421021560>: {
        s: "Failed to find \"Hello from GlusterFS!\", last result: \"\"",
    }
    Failed to find "Hello from GlusterFS!", last result: ""
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:255

Issues about this test specifically: #37056

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Timed out after 120.002s.
Expected
    <string>: content of file "/etc/annotations": builder="bar"
    kubernetes.io/config.seen="2017-04-04T20:35:52.243543357Z"
    kubernetes.io/config.source="api"
    ... (the same three annotation lines repeat for every poll over the 120s window)
    
to contain substring
    <string>: builder="foo"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:152

Issues about this test specifically: #28462 #33782 #34014 #37374
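
For reference, this downward API volume test changes the pod's annotations and then polls the projected file until the expected value appears; the dump above shows the 120s window expiring with /etc/annotations never containing the expected builder value. A minimal sketch of that file-polling check; the path, expected substring, and timings are hypothetical:

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// waitForFileContent re-reads path until it contains want or the timeout
// expires, mirroring how the test watches the projected annotations file.
func waitForFileContent(path, want string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), want) {
			return nil // expected content showed up
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %q to contain %q", path, want)
}

func main() {
	if err := waitForFileContent("/etc/annotations", `builder="foo"`, 2*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}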

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203aceb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/8860/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420a6ace0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:395

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc420406e90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc4208dac90>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
wait for pod "pod-a5354c7e-19bd-11e7-aa78-0242ac110004" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203fbca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #31400

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc420451920>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420363d70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9095/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.75.60 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-3sf3h] []  0xc420bc1860 Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc420a8ba10 exit status 1 <nil> <nil> true [0xc420a920a0 0xc420a920c8 0xc420a920d8] [0xc420a920a0 0xc420a920c8 0xc420a920d8] [0xc420a920a8 0xc420a920c0 0xc420a920d0] [0x9746f0 0x9747f0 0x9747f0] 0xc4206e8f60 <nil>}:\nCommand stdout:\nCreated update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.75.60 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-3sf3h] []  0xc420bc1860 Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc420a8ba10 exit status 1 <nil> <nil> true [0xc420a920a0 0xc420a920c8 0xc420a920d8] [0xc420a920a0 0xc420a920c8 0xc420a920d8] [0xc420a920a8 0xc420a920c0 0xc420a920d0] [0x9746f0 0x9747f0 0x9747f0] 0xc4206e8f60 <nil>}:
    Command stdout:
    Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203e36d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 14:22:19.396: Couldn't delete ns: "e2e-tests-kubectl-qvvpq": namespace e2e-tests-kubectl-qvvpq was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-kubectl-qvvpq was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc4203fa300>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9097/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42039ec80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:412
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.75.60 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-9xxlg -i nginx cat] []  0xc4206db3a0  Unable to connect to the server: dial tcp 104.198.75.60:443: i/o timeout\n [] <nil> 0xc420f0e270 exit status 1 <nil> <nil> true [0xc420b62080 0xc420b620a8 0xc420b620b8] [0xc420b62080 0xc420b620a8 0xc420b620b8] [0xc420b62088 0xc420b620a0 0xc420b620b0] [0x9746f0 0x9747f0 0x9747f0] 0xc420762a80 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.198.75.60:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.75.60 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-9xxlg -i nginx cat] []  0xc4206db3a0  Unable to connect to the server: dial tcp 104.198.75.60:443: i/o timeout
     [] <nil> 0xc420f0e270 exit status 1 <nil> <nil> true [0xc420b62080 0xc420b620a8 0xc420b620b8] [0xc420b62080 0xc420b620a8 0xc420b620b8] [0xc420b62088 0xc420b620a0 0xc420b620b0] [0x9746f0 0x9747f0 0x9747f0] 0xc420762a80 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 104.198.75.60:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.75.60 --kubeconfig=/workspace/.kube/config get pods update-demo-nautilus-znxk9 -o template --template={{if (exists . \"status\" \"containerStatuses\")}}{{range .status.containerStatuses}}{{if (and (eq .name \"update-demo\") (exists . \"state\" \"running\"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dm06m] []  <nil>  Unable to connect to the server: dial tcp 104.198.75.60:443: i/o timeout\n [] <nil> 0xc42066ef90 exit status 1 <nil> <nil> true [0xc4203b2560 0xc4203b2578 0xc4203b2590] [0xc4203b2560 0xc4203b2578 0xc4203b2590] [0xc4203b2570 0xc4203b2588] [0x9747f0 0x9747f0] 0xc420ed7c20 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.198.75.60:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.75.60 --kubeconfig=/workspace/.kube/config get pods update-demo-nautilus-znxk9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dm06m] []  <nil>  Unable to connect to the server: dial tcp 104.198.75.60:443: i/o timeout
     [] <nil> 0xc42066ef90 exit status 1 <nil> <nil> true [0xc4203b2560 0xc4203b2578 0xc4203b2590] [0xc4203b2560 0xc4203b2578 0xc4203b2590] [0xc4203b2570 0xc4203b2588] [0x9747f0 0x9747f0] 0xc420ed7c20 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 104.198.75.60:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9227/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28426 #32168 #33756 #34797
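
No command is quoted in this failure, only the boolean assertion, so as a minimal sketch only: the test exercises a kubectl exec round trip into a test pod, along the lines of the command below (the pod name, namespace, and echoed string are placeholders, not values from this run):

    # Exec into the test pod and echo a marker string back through the apiserver.
    kubectl --kubeconfig=/workspace/.kube/config exec --namespace=<test-namespace> <pod-name> -- echo running-in-container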

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc4203ab850>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203ab850>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Apr 12 07:53:33.078: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9267/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203f4f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Apr 13 04:37:04.610: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:629
Apr 13 04:29:21.532: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:626

Issues about this test specifically: #28420 #36122
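
The check behind this failure is a plain kubectl cluster-info call that must list KubeDNS alongside the master. A minimal manual repro, assuming the same kubeconfig path used by the other quoted commands in this job:

    # The test looks for a KubeDNS entry in this output; addresses vary per cluster.
    kubectl --kubeconfig=/workspace/.kube/config cluster-info
    # Healthy output includes lines of roughly this shape:
    #   Kubernetes master is running at https://<master-ip>
    #   KubeDNS is running at https://<master-ip>/api/v1/proxy/namespaces/kube-system/services/kube-dns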

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420462e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc420af9b40>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-13 04:31:51 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-13 04:32:23 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-13 04:31:51 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.72.1.107 StartTime:2017-04-13 04:31:51 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420a49340} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://fd2884e9280786708944150b742c826ed7ac3b95c023e3221528948a03cd3463}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-13 04:31:51 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-13 04:32:23 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-13 04:31:51 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.72.1.107 StartTime:2017-04-13 04:31:51 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420a49340} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://fd2884e9280786708944150b742c826ed7ac3b95c023e3221528948a03cd3463}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188
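
The status dump above shows the busybox wget-test container terminating without ever becoming ready, i.e. the pod could not fetch anything from the Internet. A rough manual equivalent of that egress check (the image is taken from the status dump; the pod name and target host are assumptions for this sketch):

    # Run a one-shot busybox pod that tries to reach an external host; once it
    # completes, empty logs and a failed phase mean pods have no Internet egress.
    kubectl run wget-check --image=gcr.io/google_containers/busybox:1.24 --restart=Never \
      --command -- wget -T 30 -qO- http://google.com
    kubectl logs wget-check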

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Apr 13 04:41:03.287: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1582

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42036ae90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc420453800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Apr 13 04:49:38.806: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203bf200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101
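
All of the DNS cases above fail the same way: the probe pod never reports success before the timeout. A minimal manual check of cluster DNS that is independent of the e2e harness (the image is reused from the failures above; the pod name and lookup target are assumptions for this sketch):

    # Resolve the kubernetes service name from inside a throwaway busybox pod;
    # with healthy kube-dns this prints the cluster IP of kubernetes.default.
    kubectl run dns-check --image=gcr.io/google_containers/busybox:1.24 --restart=Never \
      --command -- nslookup kubernetes.default
    kubectl logs dns-check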

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9313/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc42038d710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc420450b80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42042b1f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc420450b80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42039e610>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Apr 14 03:25:58.505: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Apr 14 03:12:52.351: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Apr 14 03:12:57.227: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1582

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc421331a60>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-14 03:13:49 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-14 03:14:20 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-14 03:13:49 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.72.1.136 StartTime:2017-04-14 03:13:49 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4210680e0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://a89a872fbfb07f22bf7a6cdd4342c257779be78456fc5b3eda42ceeba4ceac52}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-14 03:13:49 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-14 03:14:20 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-14 03:13:49 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.72.1.136 StartTime:2017-04-14 03:13:49 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4210680e0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://a89a872fbfb07f22bf7a6cdd4342c257779be78456fc5b3eda42ceeba4ceac52}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:629
Apr 14 03:10:30.589: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:626

Issues about this test specifically: #28420 #36122

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9341/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:240
Expected success, but got an error:
    <*errors.errorString | 0xc42044e3e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:232

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:42
wait for pod "pod-configmaps-01385c0c-216f-11e7-879c-0242ac110003" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203ad3b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #34827

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203fa150>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420b366b0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:395

Issues about this test specifically: #27196 #28998 #32403 #33341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9506/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc4215d9f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.72.1.169:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.72.1.169:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.72.1.169:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:212

Issues about this test specifically: #27196 #28998 #32403 #33341
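
The 503 above is returned while the test POSTs to the resource consumer's ConsumeCPU endpoint through the rc-light-ctrl service proxy; the proxied target is the pod URL quoted in the message. The direct form of that request, only reachable from inside the cluster network, with the parameters copied from the error:

    # Parameters are copied from the quoted error; the endpoint asks the consumer
    # pod to consume CPU for the given duration.
    curl -X POST 'http://10.72.1.169:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'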

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203fceb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203aced0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc4205050a0>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-17 23:15:49 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-17 23:16:21 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-17 23:15:49 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.72.1.138 StartTime:2017-04-17 23:15:49 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420e23ab0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://4d2c0875b3dfee4441a40c603a07401ab32ac01e62aae80b0dab695c64ac752d}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-17 23:15:49 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-17 23:16:21 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-17 23:15:49 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.72.1.138 StartTime:2017-04-17 23:15:49 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420e23ab0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://4d2c0875b3dfee4441a40c603a07401ab32ac01e62aae80b0dab695c64ac752d}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:629
Apr 17 23:14:48.697: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:626

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Apr 17 23:31:26.873: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc4203acd60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc42043d410>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Apr 17 23:28:14.196: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1582

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203fa210>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Apr 17 23:17:44.217: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172 #40644

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9702/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 11:42:52.885: Couldn't delete ns: "e2e-tests-container-probe-w5mcb": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-container-probe-w5mcb/statefulsets\"") has prevented the request from succeeding (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-container-probe-w5mcb/statefulsets\\\"\") has prevented the request from succeeding (get statefulsets.apps)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc4208e87d0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29521

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421040580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-7v13b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-7v13b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-7v13b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc420db8800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-hjjn8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-hjjn8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-hjjn8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #35473

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 11:42:35.364: Couldn't delete ns: "e2e-tests-ssh-c821z": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-ssh-c821z/limitranges\"") has prevented the request from succeeding (get limitranges) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-ssh-c821z/limitranges\\\"\") has prevented the request from succeeding (get limitranges)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc4208c0be0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #26129 #32341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9705/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Apr 21 13:01:45.055: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.1.69:8080/dial?request=hostName&protocol=udp&host=10.72.0.41&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830
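
The probe that fails is quoted above: the test curls one netserver pod's /dial endpoint and asks it to contact a peer over UDP, expecting the peer's hostname back in the returned map. Re-issuing the same probe by hand (both addresses are copied from the failure, and this only works from a host that can reach the pod network):

    # Empty output (map[]) means 10.72.1.69 could not reach 10.72.0.41:8081 over UDP.
    curl -q -s 'http://10.72.1.69:8080/dial?request=hostName&protocol=udp&host=10.72.0.41&port=8081&tries=1'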

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203a2350>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc420d441d0>: {
        s: "service verification failed for: 10.75.248.254\nexpected [service1-0l4t0 service1-1rr0b service1-cc85t]\nreceived [service1-cc85t wget: download timed out]",
    }
    service verification failed for: 10.75.248.254
    expected [service1-0l4t0 service1-1rr0b service1-cc85t]
    received [service1-cc85t wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Apr 21 13:01:54.717: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.0.45:8080/dial?request=hostName&protocol=http&host=10.72.1.58&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.86.168 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-frnp1 execpod-sourceip-gke-bootstrap-e2e-default-pool-957ac51a-c6ln80 -- /bin/sh -c wget -T 30 -qO- 10.75.252.101:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc421362720 exit status 1 <nil> <nil> true [0xc420e92000 0xc420e92018 0xc420e92030] [0xc420e92000 0xc420e92018 0xc420e92030] [0xc420e92010 0xc420e92028] [0x9747f0 0x9747f0] 0xc4212f4300 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.86.168 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-frnp1 execpod-sourceip-gke-bootstrap-e2e-default-pool-957ac51a-c6ln80 -- /bin/sh -c wget -T 30 -qO- 10.75.252.101:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc421362720 exit status 1 <nil> <nil> true [0xc420e92000 0xc420e92018 0xc420e92030] [0xc420e92000 0xc420e92018 0xc420e92030] [0xc420e92010 0xc420e92028] [0x9747f0 0x9747f0] 0xc4212f4300 <nil>}:
    Command stdout:
    
    stderr:
    wget: download timed out
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc420427040>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:272
0 (0; 2m7.378201591s): path /api/v1/namespaces/e2e-tests-proxy-wknxh/pods/https:proxy-service-932cs-mp4cv:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.72.0.10:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.72.0.10:443/' }],RetryAfterSeconds:0,} Code:503}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:270

Issues about this test specifically: #26164 #26210 #33998 #37158
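
The request above goes through the apiserver's pod proxy subresource, and the "ssh: rejected: connect failed" text suggests the apiserver-to-node SSH tunnel could not connect rather than the pod itself failing. The same path can be exercised by hand through kubectl proxy (the local port is an assumption for this sketch; the API path is copied from the failure):

    # Open a local proxy to the apiserver, then hit the pod's HTTPS proxy subresource.
    kubectl --kubeconfig=/workspace/.kube/config proxy --port=8001 &
    curl 'http://127.0.0.1:8001/api/v1/namespaces/e2e-tests-proxy-wknxh/pods/https:proxy-service-932cs-mp4cv:443/proxy/'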

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420427040>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Apr 21 13:05:11.894: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
validating pre-stop.
Expected error:
    <*errors.errorString | 0xc42043ba40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:159

Issues about this test specifically: #30287 #35953

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9707/
Multiple broken tests:

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Failed creating the first deployment
Expected error:
    <*errors.StatusError | 0xc420721280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-3v3kh/deployments\\\"\") has prevented the request from succeeding (post deployments.extensions)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "extensions",
                Kind: "deployments",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-3v3kh/deployments\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-3v3kh/deployments\"") has prevented the request from succeeding (post deployments.extensions)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1206

Issues about this test specifically: #31502 #32947 #38646

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:34.501: Couldn't delete ns: "e2e-tests-emptydir-0tsd1": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-0tsd1\"") has prevented the request from succeeding (delete namespaces e2e-tests-emptydir-0tsd1) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-0tsd1\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-emptydir-0tsd1)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420921ea0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc420f80600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-container-probe-27lz2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-27lz2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-27lz2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #38511

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Expected error:
    <*errors.StatusError | 0xc420fa0780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get pods)",
            Reason: "Unauthorized",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #38254

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:564
Apr 21 13:56:53.157: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154

Issues about this test specifically: #28371 #29604 #37496
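
The failure here is an EOF while reading kubectl port-forward's stdout, which is what the test sees when the forwarding process exits early. A minimal manual form of the same flow (pod name, namespace, and ports are placeholders, not values from this run):

    # Forward a local port to the pod, then read through it; if the connection to
    # the apiserver or node drops, the port-forward process exits and reads hit EOF.
    kubectl --kubeconfig=/workspace/.kube/config port-forward --namespace=<test-namespace> <pod-name> 8080:80 &
    curl -s http://127.0.0.1:8080/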

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:18.505: Couldn't delete ns: "e2e-tests-proxy-jghtj": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-proxy-jghtj/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-proxy-jghtj/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420d89720), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #37435

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420428ea0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:94
Failed after 17.090s.
pod should not be ready
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*errors.StatusError>: &errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server has asked for the client to provide credentials (get pods test-webserver-061e26fa-26d5-11e7-a0c4-0242ac110002)", Reason:"Unauthorized", Details:(*unversioned.StatusDetails)(0xc420ae3a40), Code:401}}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:84

Issues about this test specifically: #28084

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1476
Apr 21 13:56:48.303: Failed getting quota scopes: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-8g3kn/resourcequotas/scopes\"") has prevented the request from succeeding (get resourcequotas scopes)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1460

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:12.677: Couldn't delete ns: "e2e-tests-emptydir-r301r": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-emptydir-r301r/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-emptydir-r301r/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc4213e85a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:188
failed to GET scheduled pod
Expected error:
    <*errors.StatusError | 0xc421445780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-gg3jh/pods/pod-submit-remove-fe33a581-26d4-11e7-8511-0242ac110002\\\"\") has prevented the request from succeeding (get pods pod-submit-remove-fe33a581-26d4-11e7-8511-0242ac110002)",
            Reason: "InternalError",
            Details: {
                Name: "pod-submit-remove-fe33a581-26d4-11e7-8511-0242ac110002",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-gg3jh/pods/pod-submit-remove-fe33a581-26d4-11e7-8511-0242ac110002\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-gg3jh/pods/pod-submit-remove-fe33a581-26d4-11e7-8511-0242ac110002\"") has prevented the request from succeeding (get pods pod-submit-remove-fe33a581-26d4-11e7-8511-0242ac110002)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:104

Issues about this test specifically: #36564

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:42.980: Couldn't delete ns: "e2e-tests-pods-vlznl": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-pods-vlznl/statefulsets\"") has prevented the request from succeeding (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-pods-vlznl/statefulsets\\\"\") has prevented the request from succeeding (get statefulsets.apps)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420e03c20), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #38308

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Expected error:
    <*errors.StatusError | 0xc420b3d200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-statefulset-1bxl0/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar\\\"\") has prevented the request from succeeding (get pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-statefulset-1bxl0/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-statefulset-1bxl0/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #38083

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:15.958: Couldn't delete ns: "e2e-tests-dns-3klfn": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-dns-3klfn/statefulsets\"") has prevented the request from succeeding (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-dns-3klfn/statefulsets\\\"\") has prevented the request from succeeding (get statefulsets.apps)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc4209760f0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #26168 #27450 #43094

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:29.740: Couldn't delete ns: "e2e-tests-deployment-nggdj": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-nggdj/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-nggdj/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420f00d70), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc420bbf780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-secrets-jswd9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-jswd9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-jswd9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: DiffResources {e2e.go}

Error: 3 leaked resources
[ disks ]
+NAME                                                             ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-3ae3-pvc-dc209ee2-26d4-11e7-9e4c-42010af00008  us-central1-f  1        pd-standard  READY
+gke-bootstrap-e2e-3ae3-pvc-dc24a0c9-26d4-11e7-9e4c-42010af00008  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153
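
The two disks listed above are PersistentVolume-backed PDs that outlived their test namespaces. A minimal cleanup sketch with gcloud, using only the names and zone shown in the diff (the prefix filter is just illustrative):

$ gcloud compute disks list --filter="name~'gke-bootstrap-e2e-3ae3-pvc'"
$ gcloud compute disks delete \
    gke-bootstrap-e2e-3ae3-pvc-dc209ee2-26d4-11e7-9e4c-42010af00008 \
    gke-bootstrap-e2e-3ae3-pvc-dc24a0c9-26d4-11e7-9e4c-42010af00008 \
    --zone=us-central1-f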

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.StatusError | 0xc4212b2c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-4mhh7/pods/annotationupdatef605541b-26d4-11e7-a2d4-0242ac110002\\\"\") has prevented the request from succeeding (get pods annotationupdatef605541b-26d4-11e7-a2d4-0242ac110002)",
            Reason: "InternalError",
            Details: {
                Name: "annotationupdatef605541b-26d4-11e7-a2d4-0242ac110002",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-4mhh7/pods/annotationupdatef605541b-26d4-11e7-a2d4-0242ac110002\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-4mhh7/pods/annotationupdatef605541b-26d4-11e7-a2d4-0242ac110002\"") has prevented the request from succeeding (get pods annotationupdatef605541b-26d4-11e7-a2d4-0242ac110002)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:70

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:19.901: Couldn't delete ns: "e2e-tests-disruption-q7hd8": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-disruption-q7hd8/statefulsets\"") has prevented the request from succeeding (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-disruption-q7hd8/statefulsets\\\"\") has prevented the request from succeeding (get statefulsets.apps)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420cbcdc0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32646

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42160ec00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-k3skb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-k3skb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-k3skb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:47.010: Couldn't delete ns: "e2e-tests-statefulset-ztkfr": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-statefulset-ztkfr/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-statefulset-ztkfr/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420703b80), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:39.003: Couldn't delete ns: "e2e-tests-resourcequota-qr24s": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-resourcequota-qr24s/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-resourcequota-qr24s/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc42073ee10), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Apr 21 13:56:59.149: Failed to delete pod "pod-host-path-test": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-hostpath-6h3bt/pods/pod-host-path-test\"") has prevented the request from succeeding (delete pods pod-host-path-test)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:118

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:29.175: Couldn't delete ns: "e2e-tests-kubectl-r8zxb": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-kubectl-r8zxb/statefulsets\"") has prevented the request from succeeding (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-kubectl-r8zxb/statefulsets\\\"\") has prevented the request from succeeding (get statefulsets.apps)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420adebe0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:53.068: Couldn't delete ns: "e2e-tests-events-6g79z": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-events-6g79z\"") has prevented the request from succeeding (delete namespaces e2e-tests-events-6g79z) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-events-6g79z\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-events-6g79z)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420ea0280), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #28346

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:32.028: Couldn't delete ns: "e2e-tests-deployment-3gvxd": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-deployment-3gvxd/statefulsets\"") has prevented the request from succeeding (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-deployment-3gvxd/statefulsets\\\"\") has prevented the request from succeeding (get statefulsets.apps)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420e2b630), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 21 13:56:12.375: Couldn't delete ns: "e2e-tests-clientset-c4x6s": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-clientset-c4x6s/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-clientset-c4x6s/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420d51630), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #31183 #36182
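
Every failure in this run is the same symptom: the apiserver answered HTTP 500 for a variety of reads and deletes, both core /api/v1 paths and /apis/... group paths, while tests ran and while the framework tried to clean up namespaces. As a rough way to separate a master-side problem from a test-client problem, the same paths can be requested directly; a sketch assuming a kubectl new enough to support --raw, with the namespace names simply copied from the logs above:

$ kubectl get --raw /apis/apps/v1beta1/namespaces/e2e-tests-pods-vlznl/statefulsets
$ kubectl get --raw /apis/extensions/v1beta1/namespaces/e2e-tests-deployment-nggdj/deployments

If these also come back as "Internal Server Error", the 500s originate on the master rather than in the e2e client.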

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/9977/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Apr 26 16:29:49.946: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1582

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Apr 26 16:35:02.661: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc420450220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc420c5e380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.72.1.25:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.72.1.25:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.72.1.25:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:212

Issues about this test specifically: #27196 #28998 #32403 #33341
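
The EOF above happens while POSTing to the resource-consumer controller service through the apiserver proxy. A sketch of reproducing that call by hand, assuming kubectl proxy on its default port; the service name and query string are copied from the error, and <namespace> is a placeholder because the log does not show it:

$ kubectl proxy --port=8001 &
$ curl -X POST "http://127.0.0.1:8001/api/v1/namespaces/<namespace>/services/rc-light-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20"

An immediate EOF or 503 here points at the consumer pod or its service, not at the HPA controller.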

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:629
Apr 26 16:36:35.319: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:626

Issues about this test specifically: #28420 #36122
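
kubectl cluster-info builds its list from kube-system services labeled kubernetes.io/cluster-service=true, so a quick check is to compare its output against the kube-dns Service directly (service name and label are the usual defaults, not taken from this log):

$ kubectl cluster-info
$ kubectl get svc kube-dns -n kube-system --show-labels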

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc42095ff00>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-26 16:17:49 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-26 16:18:20 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-26 16:17:49 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.72.1.37 StartTime:2017-04-26 16:17:49 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc42014bdc0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://42de30a344ae1675e172848aed08fc4b7951845835cc9953db09e73a10931d56}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-26 16:17:49 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-26 16:18:20 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-26 16:17:49 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.72.1.37 StartTime:2017-04-26 16:17:49 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc42014bdc0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://42de30a344ae1675e172848aed08fc4b7951845835cc9953db09e73a10931d56}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188
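
The wget-test pod above is the e2e suite's outbound-connectivity probe. A rough manual equivalent with the same busybox image; the exact wget flags here are just a plausible choice, not necessarily the ones the test uses:

$ kubectl run wget-test --image=gcr.io/google_containers/busybox:1.24 --restart=Never -- wget -T 30 http://google.com
$ kubectl get pod wget-test -o wide ; kubectl logs wget-test

A Failed phase with no wget output suggests the node's pod network has no egress, which matches the status dump above.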

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Apr 26 16:25:16.838: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172 #40644
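
This test expects the not-yet-ready pod to still appear in the service's endpoints. A quick way to see whether the address ended up in notReadyAddresses or was dropped entirely (service name from the failure message, <namespace> as a placeholder):

$ kubectl get endpoints slow-terminating-unready-pod -n <namespace> -o yaml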

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc420413490>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420414d50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42039f9e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450 #43094
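
All four DNS failures in this run are the same "timed out waiting for the condition" from the probe at dns.go:219. A minimal manual check of in-cluster resolution and of kube-dns itself, assuming the stock busybox image and the usual kube-dns label (neither is taken from this log):

$ kubectl run dns-check --image=gcr.io/google_containers/busybox:1.24 --restart=Never -- sleep 3600
$ kubectl exec dns-check -- nslookup kubernetes.default
$ kubectl get pods -n kube-system -l k8s-app=kube-dns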

@caseydavenport
Member

/assign

@caseydavenport
Member

/close

Closing due to inactivity - no issues seen since v1.6.

@caseydavenport
Member

/reopen

This Issue hasn't been active in 52 days.

Oops! Thought this meant there were no failures for 52 days.

@k8s-ci-robot reopened this May 18, 2017
@caseydavenport
Member

A number of different errors here which don't seem to be occurring any more.

Closing, will see if anything pops up.

/close
