ci-kubernetes-e2e-gci-gke-staging: broken test run #38077

Closed
k8s-github-robot opened this issue Dec 5, 2016 · 23 comments
Labels: area/test-infra, kind/flake, priority/backlog

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/67/

Run so broken it didn't make JUnit output!

@k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 5, 2016
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/69/

Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820d3bf80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-808lw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-808lw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-808lw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #36183
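
Every failure in this run has the same shape: the test's namespace setup (framework.go:133) appears to fail while watching the namespace's `default` service account, because the apiserver answers 500 on `/api/v1/watch/namespaces/<ns>/serviceaccounts?fieldSelector=metadata.name%3Ddefault`. As a rough illustration only (not the e2e framework's actual code, and assuming a recent client-go, which issues the equivalent request as `GET .../serviceaccounts?watch=true&fieldSelector=...` rather than the legacy `/api/v1/watch/...` path in the logs), the failing request corresponds to a watch like this:

```go
// Illustrative sketch: hand-rolled equivalent of the watch that failed above.
// The namespace name is taken from the first failure; kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch the "default" service account in the namespace, i.e. the same
	// resource + fieldSelector the e2e framework was waiting on.
	ns := "e2e-tests-emptydir-808lw"
	w, err := client.CoreV1().ServiceAccounts(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=default",
	})
	if err != nil {
		// A 500 from the apiserver surfaces here as a *StatusError with
		// Reason "InternalError" and Code 500, as in the dumps above.
		panic(err)
	}
	defer w.Stop()

	// Print events until the watch is closed.
	for ev := range w.ResultChan() {
		fmt.Println("event:", ev.Type)
	}
}
```

When the apiserver returns 500 on that request, client-go wraps it into the `*errors.StatusError` with `Reason: "InternalError"` and `Code: 500` that repeats in every dump below, so the individual tests are collateral damage of an apiserver-side problem rather than independent failures.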

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8215cd000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-0x8et/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-0x8et/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-0x8et/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820f11c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-flaiu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-flaiu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-flaiu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821870380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-q8rh3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-q8rh3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-q8rh3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34064

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8218e9280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-l9sg5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-l9sg5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-l9sg5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35579

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82059a280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-containers-8r052/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-containers-8r052/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-containers-8r052/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #36706

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e2fe00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubernetes-dashboard-v1zco/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubernetes-dashboard-v1zco/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubernetes-dashboard-v1zco/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26191

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821064080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sched-pred-rf5pr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-rf5pr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-rf5pr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820f11800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-r7a59/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-r7a59/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-r7a59/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29513

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821485e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-c2z6f/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-c2z6f/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-c2z6f/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #37274

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820c4e500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-proxy-dpw3q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-dpw3q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-dpw3q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8202f4380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-j7dzd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-j7dzd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-j7dzd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29831

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820d8e500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-ec3cc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-ec3cc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-ec3cc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35590

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82140c100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-daemonrestart-ajs6m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-daemonrestart-ajs6m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-daemonrestart-ajs6m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31407

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8205c0400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-replication-controller-kkh6e/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-kkh6e/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-kkh6e/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32087

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820ff7e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sched-pred-7cb26/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-7cb26/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-7cb26/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82059a580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-o8btn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-o8btn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-o8btn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #36649

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82140d100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-7zj49/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-7zj49/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-7zj49/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Mesos starts static pods on every node in the mesos cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820cb2180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-qc8ky/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-qc8ky/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-qc8ky/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820947680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-resize-nodes-9rs61/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resize-nodes-9rs61/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resize-nodes-9rs61/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82153dc80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-ux9ij/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-ux9ij/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-ux9ij/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31836

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820ba3480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-proxy-aogdf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-aogdf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-aogdf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32089

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821908700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-u4hoz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-u4hoz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-u4hoz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821750b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-g7art/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-g7art/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-g7art/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27703 #32981 #35286

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821138200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-scheduledjob-y5y3c/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-y5y3c/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-y5y3c/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32035 #34472

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821920a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-yy0qm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-yy0qm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-yy0qm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82127f580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-zbokq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-zbokq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-zbokq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Mesos schedules pods annotated with roles on correct slaves {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821751a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-rlmyp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-rlmyp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-rlmyp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821526e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-j5a6p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-j5a6p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-j5a6p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30352

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821076f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-svc-latency-x1ry1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-svc-latency-x1ry1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-svc-latency-x1ry1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30632

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820fbb500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-0s48m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-0s48m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-0s48m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821484200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-zve0a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-zve0a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-zve0a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #33987

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821650800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-scheduledjob-fmzy1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-fmzy1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-fmzy1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821782300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sysctl-dj1ss/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sysctl-dj1ss/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sysctl-dj1ss/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821751e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-job-erx49/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-erx49/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-erx49/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821464400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-init-container-h1pia/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-h1pia/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-h1pia/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31408

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820c84e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-7jsob/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-7jsob/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-7jsob/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821921500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-v1job-9kj2z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-9kj2z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-9kj2z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e76c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/82/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 10:49:36.894: Couldn't delete ns: "e2e-tests-kubectl-5n693": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-5n693/pods\"") has prevented the request from succeeding (get pods) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-5n693/pods\\\"\") has prevented the request from succeeding (get pods)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820792eb0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28420 #36122
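
Several failures in this comment are not in the test body itself but in the framework's namespace cleanup (framework.go:338): it deletes the test namespace and then lists what is left inside it, and those follow-up list calls are the ones returning 500. A rough sketch of that delete-and-wait pattern, assuming a recent client-go and reusing the namespace name from the failure above:

    // Minimal sketch (assumes a recent client-go): delete the test namespace and
    // poll until the API server reports it gone. Transient errors while polling
    // are tolerated.
    package main

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func deleteNamespaceAndWait(client kubernetes.Interface, ns string) error {
        err := client.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{})
        if err != nil && !apierrors.IsNotFound(err) {
            return err
        }
        // Namespace deletion is asynchronous: the namespace stays Terminating until
        // its contents are removed, so poll for the final NotFound.
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            _, getErr := client.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
            if apierrors.IsNotFound(getErr) {
                return true, nil
            }
            return false, nil // not gone yet, or a transient error such as the 500s in this run
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        if err := deleteNamespaceAndWait(client, "e2e-tests-kubectl-5n693"); err != nil {
            panic(err)
        }
    }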

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1221
Dec  8 11:59:23.291: Failed to update Service "lb-sourcerange": Service "lb-sourcerange" is invalid: spec.loadBalancerSourceRanges: Invalid value: ["10.72.3.194/32"]: field is immutable
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1991

Issues about this test specifically: #38174
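
For reference, the operation that validation rejects here is simply an update of spec.loadBalancerSourceRanges on an existing Service. A hedged client-go sketch of such an update follows; the Service name and CIDR come from the log, while the namespace and the RetryOnConflict wrapper are illustrative additions rather than the test's actual code.

    // Hedged sketch: update spec.loadBalancerSourceRanges on an existing Service.
    // "lb-sourcerange" and the CIDR come from the failure above; the namespace and
    // the RetryOnConflict wrapper are illustrative, not the e2e test's code.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        svcs := client.CoreV1().Services("default") // placeholder; the e2e test uses its own generated namespace

        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            svc, getErr := svcs.Get(context.TODO(), "lb-sourcerange", metav1.GetOptions{})
            if getErr != nil {
                return getErr
            }
            svc.Spec.LoadBalancerSourceRanges = []string{"10.72.3.194/32"}
            _, updErr := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{})
            return updErr // in this run, validation rejects this with "field is immutable"
        })
        if err != nil {
            panic(err)
        }
    }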

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 10:45:30.059: Couldn't delete ns: "e2e-tests-namespaces-0k928": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-namespaces-0k928/replicasets\"") has prevented the request from succeeding (get replicasets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-namespaces-0k928/replicasets\\\"\") has prevented the request from succeeding (get replicasets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821585810), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27957

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:333
Expected error:
    <*errors.StatusError | 0xc82103c280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-services-dk9ow/pods/execpod-6dyoc\\\"\") has prevented the request from succeeding (delete pods execpod-6dyoc)",
            Reason: "InternalError",
            Details: {
                Name: "execpod-6dyoc",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-dk9ow/pods/execpod-6dyoc\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-dk9ow/pods/execpod-6dyoc\"") has prevented the request from succeeding (delete pods execpod-6dyoc)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1443

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 10:49:26.856: Couldn't delete ns: "e2e-tests-init-container-qcyv5": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-init-container-qcyv5/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-init-container-qcyv5/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82190b130), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #31408

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/83/

Multiple broken tests:

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 18 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-a859de97  n1-standard-2               2016-12-08T12:08:08.736-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-a859de97-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-a859de97-434n  us-central1-f  n1-standard-2               10.240.0.4   104.154.187.83  RUNNING
+gke-bootstrap-e2e-default-pool-a859de97-ky5y  us-central1-f  n1-standard-2               10.240.0.3   108.59.80.111   RUNNING
+gke-bootstrap-e2e-default-pool-a859de97-njks  us-central1-f  n1-standard-2               10.240.0.2   104.154.198.43  RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-a859de97-434n  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-a859de97-ky5y  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-a859de97-njks  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-10ee1b0c-764bab88-bd82-11e6-976d-42010af00024  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-a859de97-njks  1000
+gke-bootstrap-e2e-10ee1b0c-76dda4d5-bd82-11e6-976d-42010af00024  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-a859de97-434n  1000
+gke-bootstrap-e2e-10ee1b0c-7786dedd-bd82-11e6-976d-42010af00024  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-a859de97-ky5y  1000
+gke-bootstrap-e2e-10ee1b0c-all  bootstrap-e2e  10.72.0.0/14     udp,icmp,esp,ah,sctp,tcp
+gke-bootstrap-e2e-10ee1b0c-ssh  bootstrap-e2e  35.184.38.43/32  tcp:22                                  gke-bootstrap-e2e-10ee1b0c-node
+gke-bootstrap-e2e-10ee1b0c-vms  bootstrap-e2e  10.240.0.0/16    tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-10ee1b0c-node

Issues about this test specifically: #33373 #33416 #34060

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/84/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 00:48:20.431: Couldn't delete ns: "e2e-tests-kubectl-lnu9w": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-lnu9w/podtemplates\"") has prevented the request from succeeding (get podtemplates) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-lnu9w/podtemplates\\\"\") has prevented the request from succeeding (get podtemplates)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821144820), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc821388a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-yxoyx/replicationcontrollers/rc-light-ctrl\\\"\") has prevented the request from succeeding (get replicationControllers rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "replicationControllers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-yxoyx/replicationcontrollers/rc-light-ctrl\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-yxoyx/replicationcontrollers/rc-light-ctrl\"") has prevented the request from succeeding (get replicationControllers rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:307

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:688
getting pod back-off-cap
Expected error:
    <*errors.StatusError | 0xc821319600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-idntw/pods/back-off-cap\\\"\") has prevented the request from succeeding (get pods back-off-cap)",
            Reason: "InternalError",
            Details: {
                Name: "back-off-cap",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-idntw/pods/back-off-cap\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-idntw/pods/back-off-cap\"") has prevented the request from succeeding (get pods back-off-cap)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:105

Issues about this test specifically: #27703 #32981 #35286

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 23:17:12.311: Couldn't delete ns: "e2e-tests-v1job-chnxe": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-v1job-chnxe/endpoints\"") has prevented the request from succeeding (get endpoints) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-v1job-chnxe/endpoints\\\"\") has prevented the request from succeeding (get endpoints)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820ad8460), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 20:51:07.593: All nodes should be ready after test, an error on the server ("Internal Server Error: \"/api/v1/nodes\"") has prevented the request from succeeding (get nodes)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:418

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1221
Dec  8 20:53:04.175: Failed to update Service "lb-sourcerange": Service "lb-sourcerange" is invalid: spec.loadBalancerSourceRanges: Invalid value: ["10.72.0.12/32"]: field is immutable
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1991

Issues about this test specifically: #38174

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:87
Expected error:
    <*errors.StatusError | 0xc820ca7200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard\\\"\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard\"") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:85

Issues about this test specifically: #26191

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 01:24:01.807: Couldn't delete ns: "e2e-tests-resize-nodes-0lanb": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resize-nodes-0lanb\"") has prevented the request from succeeding (delete namespaces e2e-tests-resize-nodes-0lanb) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resize-nodes-0lanb\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-resize-nodes-0lanb)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821b44280), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #30187 #35293 #35845

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82152f720>: {
        s: "Unable to get server version: an error on the server (\"Internal Server Error: \\\"/version\\\"\") has prevented the request from succeeding",
    }
    Unable to get server version: an error on the server ("Internal Server Error: \"/version\"") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27655 #33876
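
The "Unable to get server version" messages in the SchedulerPredicates setup come from a plain GET of /version through the discovery client, which this run also answers with a 500. A minimal sketch of that call, assuming a recent client-go:

    // Minimal sketch (assumes a recent client-go): the "/version" request these
    // SchedulerPredicates setups fail on is the discovery client's ServerVersion call.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // GET /version on the API server; in this run it returns 500.
        info, err := client.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Printf("server version: %s.%s (%s)\n", info.Major, info.Minor, info.GitVersion)
    }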

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node capacity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 22:25:19.400: Couldn't delete ns: "e2e-tests-downward-api-sz009": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-sz009/serviceaccounts\"") has prevented the request from succeeding (get serviceaccounts) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-sz009/serviceaccounts\\\"\") has prevented the request from succeeding (get serviceaccounts)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8216487d0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28065 #38450

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821de8ae0>: {
        s: "Namespace e2e-tests-resize-nodes-0lanb is active",
    }
    Namespace e2e-tests-resize-nodes-0lanb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28091 #38346
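
The repeated "Namespace e2e-tests-resize-nodes-0lanb is active" failures are a precondition check rather than a scheduling problem: the serial scheduler tests wait for earlier tests' namespaces to finish terminating, and the resize-nodes namespace whose deletion failed above is still present. A sketch of that kind of check, assuming the usual e2e-tests- name prefix and a recent client-go:

    // Hedged sketch: list namespaces with the e2e prefix that are still in phase
    // Active before a [Serial] test starts. The prefix check is an assumption for
    // illustration, not the framework's exact logic.
    package main

    import (
        "context"
        "fmt"
        "strings"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        nsList, err := client.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, ns := range nsList.Items {
            if strings.HasPrefix(ns.Name, "e2e-tests-") && ns.Status.Phase == corev1.NamespaceActive {
                // In this run, e2e-tests-resize-nodes-0lanb shows up here and
                // blocks the serial scheduler tests.
                fmt.Printf("leftover namespace still active: %s\n", ns.Name)
            }
        }
    }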

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:1236
Expected error:
    <*errors.StatusError | 0xc820cf1a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-sched-pred-j57qd/pods/with-podantiaffinity-2c62eddb-bdd8-11e6-89f6-0242ac110009\\\"\") has prevented the request from succeeding (get pods with-podantiaffinity-2c62eddb-bdd8-11e6-89f6-0242ac110009)",
            Reason: "InternalError",
            Details: {
                Name: "with-podantiaffinity-2c62eddb-bdd8-11e6-89f6-0242ac110009",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-sched-pred-j57qd/pods/with-podantiaffinity-2c62eddb-bdd8-11e6-89f6-0242ac110009\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-sched-pred-j57qd/pods/with-podantiaffinity-2c62eddb-bdd8-11e6-89f6-0242ac110009\"") has prevented the request from succeeding (get pods with-podantiaffinity-2c62eddb-bdd8-11e6-89f6-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:1234

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:287
Dec  8 23:14:37.254: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154

Issues about this test specifically: #27680 #38211
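
The port-forward failure has a different shape: the test launches kubectl port-forward as a subprocess and reads the bound local port from its stdout, and here that read hits EOF before the expected line arrives. A hedged sketch of that step; the pod name and remote port are placeholders rather than the test's actual values.

    // Hedged sketch of the failing step: run `kubectl port-forward` as a subprocess
    // and read the first line of its stdout (normally "Forwarding from
    // 127.0.0.1:<port> -> 80"). Pod name and remote port are placeholders.
    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "port-forward", "pfpod", ":80")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        defer cmd.Process.Kill()

        line, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            // The failure mode in the log: EOF instead of the "Forwarding from ..."
            // line, i.e. kubectl exited or closed stdout early.
            panic(fmt.Errorf("failed to read from kubectl port-forward stdout: %v", err))
        }
        fmt.Print(line)
    }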

Failed: [k8s.io] Generated release_1_2 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 01:52:30.162: Couldn't delete ns: "e2e-tests-clientset-1lsps": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-clientset-1lsps/events\"") has prevented the request from succeeding (get events) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-clientset-1lsps/events\\\"\") has prevented the request from succeeding (get events)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82060c9b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32043 #35580

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:100
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc8215e8800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-var-expansion-bt2jz/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-var-expansion-bt2jz/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-var-expansion-bt2jz/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 00:50:12.616: Couldn't delete ns: "e2e-tests-kubectl-rd9hj": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-rd9hj/events\"") has prevented the request from succeeding (get events) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-rd9hj/events\\\"\") has prevented the request from succeeding (get events)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821d95310), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29710

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8216ea5b0>: {
        s: "Namespace e2e-tests-resize-nodes-0lanb is active",
    }
    Namespace e2e-tests-resize-nodes-0lanb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820eec490>: {
        s: "Namespace e2e-tests-resize-nodes-0lanb is active",
    }
    Namespace e2e-tests-resize-nodes-0lanb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8211da6b0>: {
        s: "Unable to get server version: an error on the server (\"Internal Server Error: \\\"/version\\\"\") has prevented the request from succeeding",
    }
    Unable to get server version: an error on the server ("Internal Server Error: \"/version\"") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #31918

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428
Expected error:
    <*errors.errorString | 0xc8210ba0e0>: {
        s: "Error creating replication controller: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-services-ktuuy/replicationcontrollers\\\"\") has prevented the request from succeeding (post replicationControllers)",
    }
    Error creating replication controller: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-ktuuy/replicationcontrollers\"") has prevented the request from succeeding (post replicationControllers)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] EmptyDir wrapper volumes should becomes running {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 00:48:14.452: Couldn't delete ns: "e2e-tests-emptydir-wrapper-kpw0t": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-wrapper-kpw0t/endpoints\"") has prevented the request from succeeding (get endpoints) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-wrapper-kpw0t/endpoints\\\"\") has prevented the request from succeeding (get endpoints)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821f4dd60), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28450

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:983
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.58.70 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-rc --image=gcr.io/google_containers/nginx-slim:0.7 --generator=run/v1 --namespace=e2e-tests-kubectl-94dq7] []  <nil>  Error from server: an error on the server (\"Internal Server Error: \\\"/apis\\\"\") has prevented the request from succeeding\n [] <nil> 0xc8213f2d60 exit status 1 <nil> true [0xc820038268 0xc8200382c0 0xc820038330] [0xc820038268 0xc8200382c0 0xc820038330] [0xc820038298 0xc820038308] [0xafa830 0xafa830] 0xc821559c20}:\nCommand stdout:\n\nstderr:\nError from server: an error on the server (\"Internal Server Error: \\\"/apis\\\"\") has prevented the request from succeeding\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.58.70 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-rc --image=gcr.io/google_containers/nginx-slim:0.7 --generator=run/v1 --namespace=e2e-tests-kubectl-94dq7] []  <nil>  Error from server: an error on the server ("Internal Server Error: \"/apis\"") has prevented the request from succeeding
     [] <nil> 0xc8213f2d60 exit status 1 <nil> true [0xc820038268 0xc8200382c0 0xc820038330] [0xc820038268 0xc8200382c0 0xc820038330] [0xc820038298 0xc820038308] [0xafa830 0xafa830] 0xc821559c20}:
    Command stdout:
    
    stderr:
    Error from server: an error on the server ("Internal Server Error: \"/apis\"") has prevented the request from succeeding
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 23:17:40.381: Couldn't delete ns: "e2e-tests-replicaset-e1w51": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-replicaset-e1w51/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-replicaset-e1w51/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82162ca00), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32023

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:256
Failed to delete host0ROPod
Expected error:
    <*errors.StatusError | 0xc820e35800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pod-disks-zhx2y/pods/pd-test-e8c8abef-bdd8-11e6-89f6-0242ac110009\\\"\") has prevented the request from succeeding (delete pods pd-test-e8c8abef-bdd8-11e6-89f6-0242ac110009)",
            Reason: "InternalError",
            Details: {
                Name: "pd-test-e8c8abef-bdd8-11e6-89f6-0242ac110009",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-disks-zhx2y/pods/pd-test-e8c8abef-bdd8-11e6-89f6-0242ac110009\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-disks-zhx2y/pods/pd-test-e8c8abef-bdd8-11e6-89f6-0242ac110009\"") has prevented the request from succeeding (delete pods pd-test-e8c8abef-bdd8-11e6-89f6-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:248

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 01:22:59.780: Couldn't delete ns: "e2e-tests-horizontal-pod-autoscaling-f9s6s": an error on the server ("Internal Server Error: \"/api\"") has prevented the request from succeeding (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api\\\"\") has prevented the request from succeeding", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821204eb0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 22:42:47.456: Couldn't delete ns: "e2e-tests-v1job-vp2qv": an error on the server ("Internal Server Error: \"/api\"") has prevented the request from succeeding (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api\\\"\") has prevented the request from succeeding", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820d66050), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #37427

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.StatusError | 0xc82110b380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-job-yl9y7/jobs/foo\\\"\") has prevented the request from succeeding (get jobs.extensions foo)",
            Reason: "InternalError",
            Details: {
                Name: "foo",
                Group: "extensions",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-job-yl9y7/jobs/foo\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-job-yl9y7/jobs/foo\"") has prevented the request from succeeding (get jobs.extensions foo)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 23:30:22.378: Couldn't delete ns: "e2e-tests-containers-eyp2q": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-containers-eyp2q/resourcequotas\"") has prevented the request from succeeding (get resourcequotas) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-containers-eyp2q/resourcequotas\\\"\") has prevented the request from succeeding (get resourcequotas)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821f4ccd0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29994

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  8 22:14:29.521: Couldn't delete ns: "e2e-tests-pod-disks-pd20i": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-disks-pd20i/events\"") has prevented the request from succeeding (get events) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pod-disks-pd20i/events\\\"\") has prevented the request from succeeding (get events)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820c30780), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Dec  8 22:31:57.739: Failed to create netserver-0 pod: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-nettest-vl84c/pods\"") has prevented the request from succeeding (post pods)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:491

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821a735d0>: {
        s: "Namespace e2e-tests-resize-nodes-0lanb is active",
    }
    Namespace e2e-tests-resize-nodes-0lanb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.StatusError | 0xc8217b1280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-ybm4l/replicationcontrollers/rc\\\"\") has prevented the request from succeeding (get replicationControllers rc)",
            Reason: "InternalError",
            Details: {
                Name: "rc",
                Group: "",
                Kind: "replicationControllers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-ybm4l/replicationcontrollers/rc\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-ybm4l/replicationcontrollers/rc\"") has prevented the request from succeeding (get replicationControllers rc)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:249

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Expected error:
    <*errors.StatusError | 0xc821a03880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-tarw2/services/test-deployment\\\"\") has prevented the request from succeeding (delete services test-deployment)",
            Reason: "InternalError",
            Details: {
                Name: "test-deployment",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-tarw2/services/test-deployment\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-tarw2/services/test-deployment\"") has prevented the request from succeeding (delete services test-deployment)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:306

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:481
Expected error:
    <*errors.errorString | 0xc82166ea40>: {
        s: "failed to wait for pods responding: Unable to get server version: the server has asked for the client to provide credentials",
    }
    failed to wait for pods responding: Unable to get server version: the server has asked for the client to provide credentials
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:480

Issues about this test specifically: #27470 #30156 #34304 #37620

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/85/

Multiple broken tests:

Failed: TearDown {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 18 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-7d53c553  n1-standard-2               2016-12-09T02:18:32.604-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-7d53c553-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-7d53c553-dhvx  us-central1-f  n1-standard-2               10.240.0.2   130.211.209.57  RUNNING
+gke-bootstrap-e2e-default-pool-7d53c553-e6j6  us-central1-f  n1-standard-2               10.240.0.3   104.154.203.10  RUNNING
+gke-bootstrap-e2e-default-pool-7d53c553-xgbs  us-central1-f  n1-standard-2               10.240.0.4   146.148.32.202  RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-7d53c553-dhvx  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-7d53c553-e6j6  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-7d53c553-xgbs  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-51026f52-33281cdd-bdf9-11e6-a1af-42010af00024  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-7d53c553-xgbs  1000
+gke-bootstrap-e2e-51026f52-33bbab6c-bdf9-11e6-a1af-42010af00024  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-7d53c553-e6j6  1000
+gke-bootstrap-e2e-51026f52-33f809fa-bdf9-11e6-a1af-42010af00024  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-7d53c553-dhvx  1000
+gke-bootstrap-e2e-51026f52-all  bootstrap-e2e  10.72.0.0/14     tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-51026f52-ssh  bootstrap-e2e  35.184.38.43/32  tcp:22                                  gke-bootstrap-e2e-51026f52-node
+gke-bootstrap-e2e-51026f52-vms  bootstrap-e2e  10.240.0.0/16    icmp,tcp:1-65535,udp:1-65535            gke-bootstrap-e2e-51026f52-node

Issues about this test specifically: #33373 #33416 #34060
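
The diff above is plain gcloud output. As a rough, hedged sketch (not part of the job's tooling), the same leak check can be rerun by hand by listing whatever still matches the cluster prefix after teardown; it assumes gcloud is installed and authenticated, and the project name below is a placeholder.

```go
// Sketch only: list GCE instances still carrying the bootstrap-e2e prefix
// after teardown, the same class of objects DiffResources reports as leaked.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("gcloud", "compute", "instances", "list",
		"--project", "my-gke-staging-project", // placeholder project
		"--filter", "name ~ ^gke-bootstrap-e2e",
		"--format", "value(name,zone,status)").CombinedOutput()
	if err != nil {
		log.Fatalf("gcloud failed: %v\n%s", err, out)
	}
	if len(out) == 0 {
		fmt.Println("no leaked instances matching the prefix")
		return
	}
	fmt.Printf("possibly leaked instances:\n%s", out)
}
```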

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/86/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82146a7d0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #33883
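
Most of the SchedulerPredicates [Serial] failures in this run appear to share one cause: these tests first wait for every other e2e-tests-* namespace to finish deleting, and the stuck e2e-tests-horizontal-pod-autoscaling-zsrcm namespace keeps that precondition from ever passing. A minimal sketch of that kind of wait follows, assuming a current client-go; it is illustrative, not the framework's actual implementation.

```go
// Sketch only: block until no e2e-tests-* namespace is still Active, which is
// roughly the precondition that fails with "Namespace ... is active".
package main

import (
	"context"
	"fmt"
	"log"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// A namespace stuck in Active (for example, one left behind by an earlier
	// broken test) makes this poll time out and blocks every [Serial] test.
	err = wait.PollImmediate(15*time.Second, 10*time.Minute, func() (bool, error) {
		nss, err := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, ns := range nss.Items {
			if strings.HasPrefix(ns.Name, "e2e-tests-") && ns.Status.Phase == corev1.NamespaceActive {
				fmt.Printf("namespace %s is active, still waiting\n", ns.Name)
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		log.Fatalf("leftover e2e namespaces never went away: %v", err)
	}
	fmt.Println("no active e2e-tests-* namespaces left")
}
```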

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 11:29:28.153: Couldn't delete ns: "e2e-tests-kubectl-8jfp7": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-8jfp7/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-8jfp7/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8217b2870), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29710

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:456
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.142.183 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-ui68y run run-test-2 --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --leave-stdin-open=true -- sh -c cat && echo 'stdin closed'] []  0xc8203c56c0 Waiting for pod e2e-tests-kubectl-ui68y/run-test-2-ur6ga to be running, status is Pending, pod ready: false\n error: 500 Internal Server Error while accessing https://104.154.142.183/api/v1/namespaces/e2e-tests-kubectl-ui68y/pods/run-test-2-ur6ga/log?container=run-test-2: Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-ui68y/pods/run-test-2-ur6ga/log?container=run-test-2\"\n [] <nil> 0xc8203c5fc0 exit status 1 <nil> true [0xc820d4a848 0xc820d4a890 0xc820d4a8a8] [0xc820d4a848 0xc820d4a890 0xc820d4a8a8] [0xc820d4a858 0xc820d4a880 0xc820d4a8a0] [0xafa6d0 0xafa830 0xafa830] 0xc821b0b860}:\nCommand stdout:\nWaiting for pod e2e-tests-kubectl-ui68y/run-test-2-ur6ga to be running, status is Pending, pod ready: false\n\nstderr:\nerror: 500 Internal Server Error while accessing https://104.154.142.183/api/v1/namespaces/e2e-tests-kubectl-ui68y/pods/run-test-2-ur6ga/log?container=run-test-2: Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-ui68y/pods/run-test-2-ur6ga/log?container=run-test-2\"\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.142.183 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-ui68y run run-test-2 --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --leave-stdin-open=true -- sh -c cat && echo 'stdin closed'] []  0xc8203c56c0 Waiting for pod e2e-tests-kubectl-ui68y/run-test-2-ur6ga to be running, status is Pending, pod ready: false
     error: 500 Internal Server Error while accessing https://104.154.142.183/api/v1/namespaces/e2e-tests-kubectl-ui68y/pods/run-test-2-ur6ga/log?container=run-test-2: Internal Server Error: "/api/v1/namespaces/e2e-tests-kubectl-ui68y/pods/run-test-2-ur6ga/log?container=run-test-2"
     [] <nil> 0xc8203c5fc0 exit status 1 <nil> true [0xc820d4a848 0xc820d4a890 0xc820d4a8a8] [0xc820d4a848 0xc820d4a890 0xc820d4a8a8] [0xc820d4a858 0xc820d4a880 0xc820d4a8a0] [0xafa6d0 0xafa830 0xafa830] 0xc821b0b860}:
    Command stdout:
    Waiting for pod e2e-tests-kubectl-ui68y/run-test-2-ur6ga to be running, status is Pending, pod ready: false
    
    stderr:
    error: 500 Internal Server Error while accessing https://104.154.142.183/api/v1/namespaces/e2e-tests-kubectl-ui68y/pods/run-test-2-ur6ga/log?container=run-test-2: Internal Server Error: "/api/v1/namespaces/e2e-tests-kubectl-ui68y/pods/run-test-2-ur6ga/log?container=run-test-2"
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 13:11:00.618: Couldn't delete ns: "e2e-tests-job-5hec6": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-job-5hec6/configmaps\"") has prevented the request from succeeding (get configmaps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-job-5hec6/configmaps\\\"\") has prevented the request from succeeding (get configmaps)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82178b5e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1221
Dec  9 14:10:20.762: Failed to update Service "lb-sourcerange": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-vh2km/services/lb-sourcerange\"") has prevented the request from succeeding (put services lb-sourcerange)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1991

Issues about this test specifically: #38174

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d28db0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821acf8d0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: [k8s.io] Downward API volume should provide container's cpu request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 12:08:48.459: Couldn't delete ns: "e2e-tests-downward-api-325km": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-325km/limitranges\"") has prevented the request from succeeding (get limitranges) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-325km/limitranges\\\"\") has prevented the request from succeeding (get limitranges)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820bb0190), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821121e50>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28071

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 11:25:20.670: Couldn't delete ns: "e2e-tests-container-probe-aku69": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-aku69/replicasets\"") has prevented the request from succeeding (get replicasets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-aku69/replicasets\\\"\") has prevented the request from succeeding (get replicasets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82184da40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #37914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8211a90f0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:688
getting pod back-off-cap
Expected error:
    <*errors.StatusError | 0xc821b04180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-9c0ud/pods/back-off-cap\\\"\") has prevented the request from succeeding (get pods back-off-cap)",
            Reason: "InternalError",
            Details: {
                Name: "back-off-cap",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-9c0ud/pods/back-off-cap\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-9c0ud/pods/back-off-cap\"") has prevented the request from succeeding (get pods back-off-cap)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:105

Issues about this test specifically: #27703 #32981 #35286

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:421
Expected error:
    <*exec.ExitError | 0xc8210fa780>: {
        ProcessState: {
            pid: 3381,
            status: 256,
            rusage: {
                Utime: {Sec: 0, Usec: 80000},
                Stime: {Sec: 0, Usec: 20000},
                Maxrss: 35068,
                Ixrss: 0,
                Idrss: 0,
                Isrss: 0,
                Minflt: 2052,
                Majflt: 0,
                Nswap: 0,
                Inblock: 0,
                Oublock: 0,
                Msgsnd: 0,
                Msgrcv: 0,
                Nsignals: 0,
                Nvcsw: 470,
                Nivcsw: 53,
            },
        },
        Stderr: nil,
    }
    exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:406

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821590030>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820c2b6d0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Expected error:
    <*errors.StatusError | 0xc8215bb000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get replicationControllers rc)",
            Reason: "Unauthorized",
            Details: {
                Name: "rc",
                Group: "",
                Kind: "replicationControllers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get replicationControllers rc)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:249

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 13:53:32.104: Couldn't delete ns: "e2e-tests-resourcequota-1oyod": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-resourcequota-1oyod/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-resourcequota-1oyod/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821602780), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #34372

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:73
Expected error:
    <*errors.StatusError | 0xc8215e6780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-container-probe-9s21n/pods/test-webserver-c180c3b5-be53-11e6-9ca6-0242ac110009\\\"\") has prevented the request from succeeding (get pods test-webserver-c180c3b5-be53-11e6-9ca6-0242ac110009)",
            Reason: "InternalError",
            Details: {
                Name: "test-webserver-c180c3b5-be53-11e6-9ca6-0242ac110009",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-container-probe-9s21n/pods/test-webserver-c180c3b5-be53-11e6-9ca6-0242ac110009\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-container-probe-9s21n/pods/test-webserver-c180c3b5-be53-11e6-9ca6-0242ac110009\"") has prevented the request from succeeding (get pods test-webserver-c180c3b5-be53-11e6-9ca6-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53

Issues about this test specifically: #29521

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 13:52:46.811: Couldn't delete ns: "e2e-tests-limitrange-lmpdq": the server has asked for the client to provide credentials (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server has asked for the client to provide credentials", Reason:"Unauthorized", Details:(*unversioned.StatusDetails)(0xc8217b3900), Code:401}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27503

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 11:26:14.597: Couldn't delete ns: "e2e-tests-proxy-bn0r7": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-proxy-bn0r7\"") has prevented the request from succeeding (delete namespaces e2e-tests-proxy-bn0r7) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-proxy-bn0r7\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-proxy-bn0r7)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82184c8c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.StatusError | 0xc821b48400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get jobs.batch foo)",
            Reason: "Unauthorized",
            Details: {
                Name: "foo",
                Group: "batch",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get jobs.batch foo)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 14:13:03.925: Couldn't delete ns: "e2e-tests-emptydir-lq448": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-emptydir-lq448/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-emptydir-lq448/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821724c80), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #37500

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8205678d0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821056820>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82123a320>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821063240>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #31918

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8210ef700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-zsrcm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-zsrcm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-zsrcm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 13:53:20.658: Couldn't delete ns: "e2e-tests-kubectl-jzyu3": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-jzyu3\"") has prevented the request from succeeding (delete namespaces e2e-tests-kubectl-jzyu3) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-jzyu3\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-kubectl-jzyu3)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8202ef130), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:135
Expected error:
    <*errors.StatusError | 0xc821b04100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (delete persistentVolumeClaims pvc-99d4u)",
            Reason: "Unauthorized",
            Details: {
                Name: "pvc-99d4u",
                Group: "",
                Kind: "persistentVolumeClaims",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (delete persistentVolumeClaims pvc-99d4u)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:98

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821546bd0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d282c0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #34223

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec  9 11:29:22.805: Couldn't delete ns: "e2e-tests-job-wx8v8": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-job-wx8v8/replicationcontrollers\"") has prevented the request from succeeding (get replicationcontrollers) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-job-wx8v8/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationcontrollers)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82184c050), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #31938

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8211ebdb0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821841080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sched-pred-qv0rz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-qv0rz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-qv0rz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82121efe0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821684510>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-zsrcm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27655 #33876

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/90/

Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 20 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-489fe14a  n1-standard-2               2016-12-10T13:47:24.786-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-489fe14a-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-489fe14a-ec6i  us-central1-f  n1-standard-2               10.240.0.3   104.155.187.184  RUNNING
+gke-bootstrap-e2e-default-pool-489fe14a-x9hc  us-central1-f  n1-standard-2               10.240.0.4   104.198.151.161  RUNNING
+gke-bootstrap-e2e-default-pool-489fe14a-xtqt  us-central1-f  n1-standard-2               10.240.0.2   104.198.61.191   RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-489fe14a-ec6i  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-489fe14a-x9hc  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-489fe14a-xtqt  us-central1-f  100      pd-standard  READY
+default-route-b54fdb972784edae                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+default-route-c9db8bfe4fa1b01e                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+gke-bootstrap-e2e-d64f525f-b415cece-bf22-11e6-89ec-42010af00009  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-489fe14a-x9hc  1000
+gke-bootstrap-e2e-d64f525f-b4b964a9-bf22-11e6-89ec-42010af00009  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-489fe14a-ec6i  1000
+gke-bootstrap-e2e-d64f525f-ea617de3-bf42-11e6-89ec-42010af00009  bootstrap-e2e  10.72.3.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-489fe14a-xtqt  1000
+gke-bootstrap-e2e-d64f525f-all  bootstrap-e2e  10.72.0.0/14     sctp,tcp,udp,icmp,esp,ah
+gke-bootstrap-e2e-d64f525f-ssh  bootstrap-e2e  8.35.199.243/32  tcp:22                                  gke-bootstrap-e2e-d64f525f-node
+gke-bootstrap-e2e-d64f525f-vms  bootstrap-e2e  10.240.0.0/16    tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-d64f525f-node

Issues about this test specifically: #33373 #33416 #34060

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/92/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42038cb40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584
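
"timed out waiting for the condition" is the generic polling timeout from the wait helpers, so these DNS failures just mean the probe pod never reported the expected records in time. Below is a hedged sketch of a manual equivalent that shells out to kubectl to run nslookup inside a running pod; the pod name is a placeholder, not anything from the failing run.

```go
// Sketch only: resolve the cluster DNS name for the apiserver service from
// inside a pod, roughly what the DNS conformance probes do.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// "dns-probe" is a placeholder; any running pod that has nslookup
	// available (e.g. a busybox pod) will do.
	out, err := exec.Command("kubectl",
		"--kubeconfig", "/workspace/.kube/config",
		"exec", "dns-probe", "--",
		"nslookup", "kubernetes.default.svc.cluster.local").CombinedOutput()
	if err != nil {
		log.Fatalf("in-cluster DNS lookup failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```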

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc42038cb40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc42038cb40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1098
Expected error:
    <*errors.errorString | 0xc423038010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #26172

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:74
Expected error:
    <*errors.errorString | 0xc4230422b0>: {
        s: "want pod 'test-webserver-4c34f796-bfde-11e6-babc-0242ac110008' on 'gke-bootstrap-e2e-default-pool-b8a12c47-kbbl' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-4c34f796-bfde-11e6-babc-0242ac110008' on 'gke-bootstrap-e2e-default-pool-b8a12c47-kbbl' to be 'Running' but was 'Pending'
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:56

Issues about this test specifically: #29521

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42038cb40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/95/

Multiple broken tests:

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc420390950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4214fcb10>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc4236ecec0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298
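
Several of these failures stop at "Only 2 pods started out of 3" while bringing up a three-replica RC. The sketch below shows one way to inspect which replica is stuck, assuming a current client-go; the namespace and label selector are placeholders to be replaced with the values from the failing test's logs.

```go
// Sketch only: list the pods behind an RC's selector and print their phase
// and node, to see which replica never reached Running.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Placeholder namespace and selector.
	pods, err := cs.CoreV1().Pods("e2e-tests-services-placeholder").List(
		context.TODO(), metav1.ListOptions{LabelSelector: "name=service1"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\t%s\n", p.Name, p.Status.Phase, p.Spec.NodeName)
	}
}
```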

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc420390950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421b14f00>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422e3e480>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc420390950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc420390950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:73

Issues about this test specifically: #28346

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Expected error:
    <*errors.errorString | 0xc420390950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #33985

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc422e1a000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/143/

Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820bb0780>: {
        s: "Namespace e2e-tests-daemonrestart-qg0n1 is active",
    }
    Namespace e2e-tests-daemonrestart-qg0n1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8221f9c90>: {
        s: "Namespace e2e-tests-daemonrestart-qg0n1 is active",
    }
    Namespace e2e-tests-daemonrestart-qg0n1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820b92c30>: {
        s: "Namespace e2e-tests-daemonrestart-qg0n1 is active",
    }
    Namespace e2e-tests-daemonrestart-qg0n1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8209ea720>: {
        s: "Namespace e2e-tests-daemonrestart-qg0n1 is active",
    }
    Namespace e2e-tests-daemonrestart-qg0n1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc822016990>: {
        s: "Namespace e2e-tests-daemonrestart-qg0n1 is active",
    }
    Namespace e2e-tests-daemonrestart-qg0n1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821604e40>: {
        s: "Namespace e2e-tests-daemonrestart-qg0n1 is active",
    }
    Namespace e2e-tests-daemonrestart-qg0n1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8217ae9d0>: {
        s: "Namespace e2e-tests-daemonrestart-qg0n1 is active",
    }
    Namespace e2e-tests-daemonrestart-qg0n1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #30078 #30142
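
Every [Serial] SchedulerPredicates failure in this run is the same precondition error from scheduler_predicates.go:211: the suite refuses to start while another e2e namespace (here the leftover e2e-tests-daemonrestart-qg0n1 namespace) is still present. A minimal sketch of that kind of guard, assuming a client-go clientset and hypothetical helper names (this is not the framework's actual code), looks like:

```go
package e2echeck

import (
	"context"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkE2ENamespacesDeletedExcept is a hypothetical reimplementation of the
// precondition these tests report: fail if any e2e namespace other than the
// one this test owns is still Active, which yields errors of the form
// "Namespace e2e-tests-daemonrestart-qg0n1 is active".
func checkE2ENamespacesDeletedExcept(ctx context.Context, c kubernetes.Interface, skip string) error {
	nsList, err := c.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, ns := range nsList.Items {
		// Only e2e-created namespaces matter, and the current test's own
		// namespace is exempt.
		if !strings.HasPrefix(ns.Name, "e2e-tests-") || ns.Name == skip {
			continue
		}
		if ns.Status.Phase == corev1.NamespaceActive {
			return fmt.Errorf("Namespace %s is active", ns.Name)
		}
	}
	return nil
}
```

So the scheduler predicate tests here are collateral damage: the earlier DaemonRestart test's namespace had not finished deleting before the serial tests began.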

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/165/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8216304c0>: {
        s: "Namespace e2e-tests-nettest-8lpqm is active",
    }
    Namespace e2e-tests-nettest-8lpqm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27655 #33876

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:167
Expected error:
    <*errors.errorString | 0xc820176b40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #34104

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821ddcec0>: {
        s: "Namespace e2e-tests-nettest-8lpqm is active",
    }
    Namespace e2e-tests-nettest-8lpqm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821ad6db0>: {
        s: "Namespace e2e-tests-nettest-8lpqm is active",
    }
    Namespace e2e-tests-nettest-8lpqm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821b02af0>: {
        s: "Namespace e2e-tests-nettest-8lpqm is active",
    }
    Namespace e2e-tests-nettest-8lpqm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8205f8c00>: {
        s: "Namespace e2e-tests-nettest-8lpqm is active",
    }
    Namespace e2e-tests-nettest-8lpqm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82082c780>: {
        s: "Namespace e2e-tests-nettest-8lpqm is active",
    }
    Namespace e2e-tests-nettest-8lpqm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28853 #31585

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/203/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421feb690>: {
        s: "Namespace e2e-tests-services-585w9 is active",
    }
    Namespace e2e-tests-services-585w9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217797a0>: {
        s: "Namespace e2e-tests-services-585w9 is active",
    }
    Namespace e2e-tests-services-585w9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc421c57cc0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 157, 108],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.198.157.108:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222455a0>: {
        s: "Namespace e2e-tests-services-585w9 is active",
    }
    Namespace e2e-tests-services-585w9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218e2f80>: {
        s: "Namespace e2e-tests-services-585w9 is active",
    }
    Namespace e2e-tests-services-585w9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/238/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82141b980>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8214b8360>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820607650>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8215efa80>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #35279

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821dacf70>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82215a880>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821dad670>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428
Expected error:
    <*net.OpError | 0xc820a8db80>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\x9b\x81f",
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.155.129.102:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:546
Expected error:
    <*errors.errorString | 0xc821112670>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:338

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821354810>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8216fb170>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821c8d980>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82182ec20>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8215e5170>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821269770>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820ea74b0>: {
        s: "Namespace e2e-tests-services-pbz9a is active",
    }
    Namespace e2e-tests-services-pbz9a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
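
The "Services should work after restarting apiserver [Disruptive]" failures in this run and in run 203 both surface as a plain TCP "connection refused" against the apiserver on :443 after the restart. A minimal sketch of the kind of reachability poll a caller could use before exercising services again (hypothetical names, not the test's actual helper):

```go
package e2echeck

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls the apiserver's host:port until a TCP connection
// succeeds or the timeout expires, instead of treating the first
// "connection refused" after a restart as fatal.
func waitForAPIServer(hostPort string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", hostPort, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil // apiserver is accepting connections again
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not reachable within %v", hostPort, timeout)
}
```

For example, waitForAPIServer("104.155.129.102:443", 3*time.Minute) would have kept retrying through the window in which the dial above was refused.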

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/254/
Multiple broken tests:

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82085ee00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-init-container-pgrnv/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-pgrnv/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-pgrnv/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31408

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821254580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-resourcequota-w4sv7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-w4sv7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-w4sv7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34372

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821454780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-v1job-b8304/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-b8304/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-b8304/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29657

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821740b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-rgrr3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-rgrr3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-rgrr3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34658

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82147a280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-v1job-e23fz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-e23fz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-e23fz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8213d7780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-namespaces-6tpva/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-namespaces-6tpva/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-namespaces-6tpva/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27957

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820b8a000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-n8i50/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-n8i50/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-n8i50/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #34250

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821332e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-cadvisor-kz9ud/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-cadvisor-kz9ud/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-cadvisor-kz9ud/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32371

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820952400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-sl67r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-sl67r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-sl67r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82124fb00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-4x81r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-4x81r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-4x81r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #33285

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821b2ab00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-5agkf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-5agkf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-5agkf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #33631 #33995 #34970

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821932900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-53co4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-53co4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-53co4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29513

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8212d8300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-scheduledjob-ugbqb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-ugbqb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-ugbqb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820dbe980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-var-expansion-4jxei/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-4jxei/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-4jxei/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28503

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8210e4500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-kxgvt/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-kxgvt/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-kxgvt/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #33985

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820fc6200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-scheduledjob-bf1oc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-bf1oc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-bf1oc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821b86c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-network-i7dkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-network-i7dkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-network-i7dkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820b8a100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sched-pred-cpohq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-cpohq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-cpohq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821353080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-o86lc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-o86lc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-o86lc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #35579

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820099c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-job-3c0x4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-3c0x4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-3c0x4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28003

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc82085e800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sched-pred-b42q0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-b42q0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-b42q0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821a2fa80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-scheduledjob-2y3k2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-2y3k2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-2y3k2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821042e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-7u1s4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-7u1s4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-7u1s4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29052

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8218a0680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-9knc8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-9knc8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-9knc8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #37423

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821a2ed80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-5ky6i/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-5ky6i/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-5ky6i/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e36f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-secrets-lclkj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-lclkj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-lclkj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8212d8780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-ic9yb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-ic9yb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-ic9yb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Mesos applies slave attributes as labels {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820dcd380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-0t2mu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-0t2mu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-0t2mu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28359

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820ecb800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-nv85q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-nv85q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-nv85q/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #38308

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821098b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-container-probe-h0w7o/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-h0w7o/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-h0w7o/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29521

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821932d80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-smi2n/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-smi2n/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-smi2n/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 23 13:26:40.040: Couldn't delete ns: "e2e-tests-kubectl-ckyu3": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-ckyu3/replicationcontrollers\"") has prevented the request from succeeding (get replicationcontrollers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-ckyu3/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationcontrollers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8203af3b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8218a0f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-prestop-8bmuw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-prestop-8bmuw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-prestop-8bmuw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8215de700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-k9vy8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-k9vy8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-k9vy8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc821867380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-container-probe-60lmr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-60lmr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-60lmr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28084

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820e7a080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-5nqjf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-5nqjf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-5nqjf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8218cdb00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-daemonsets-8vt9j/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-daemonsets-8vt9j/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-daemonsets-8vt9j/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31428

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820b06700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pod-disks-xedhq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pod-disks-xedhq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pod-disks-xedhq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8218a0000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-replicaset-r0yc0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replicaset-r0yc0/serviceaccounts?fieldSelec

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/269/
Multiple broken tests:

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 01:00:20.426: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212ee4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30216 #31031 #32086
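
From build 269 onward the dominant symptom changes: the cases fail the framework's AfterEach check at framework.go:438, which requires every node to be Ready once the test finishes. A hedged sketch of that kind of check follows; it is not the framework's exact implementation, and the `e2esketch` package and helper name are made up for illustration.

    // Hypothetical helper: report nodes whose NodeReady condition is not True,
    // the situation behind "All nodes should be ready after test, Not ready nodes: ...".
    package e2esketch

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func notReadyNodes(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var notReady []string
        for _, n := range nodes.Items {
            ready := false
            for _, c := range n.Status.Conditions {
                if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                notReady = append(notReady, n.Name)
            }
        }
        return notReady, nil
    }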

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 00:53:29.614: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e518f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 00:47:12.009: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e504f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 01:36:29.287: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422462ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 00:13:21.831: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221db8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 00:56:35.982: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420230ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 01:22:42.872: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a8f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32945

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 01:25:56.401: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221104f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37439

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 23:27:13.228: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217338f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc422a0a970>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-4c15f583-zjd7 boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-4c15f583-zjd7 boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
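
The restart test reboots every node and then waits for each one to report a boot ID different from the one recorded before the reboot; the timeout above means gke-bootstrap-e2e-default-pool-4c15f583-zjd7 never came back with a new boot ID, which also explains the "not ready" failures around it. A minimal sketch of that wait, assuming the same hypothetical client setup as the earlier sketches:

    // Hypothetical helper: poll until a node reports a boot ID different from the
    // one recorded before the reboot, the condition that times out above.
    package e2esketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForBootIDChange(ctx context.Context, cs kubernetes.Interface, nodeName, oldBootID string, timeout time.Duration) error {
        return wait.PollImmediate(20*time.Second, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
            if err != nil {
                return false, nil // keep polling through transient apiserver errors
            }
            return node.Status.NodeInfo.BootID != oldBootID, nil
        })
    }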

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 23:30:43.045: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422128ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 23:33:54.080: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212658f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 23:43:33.320: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211878f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e34220>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4c15f583-zjd7 gke-bootstrap-e2e-default-pool-4c15f583-zjd7 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:55:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:56:39 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:55:57 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-4c15f583-zjd7            gke-bootstrap-e2e-default-pool-4c15f583-zjd7 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:55:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:55:59 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:55:57 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4c15f583-zjd7 gke-bootstrap-e2e-default-pool-4c15f583-zjd7 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:55:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:56:39 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:55:57 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-4c15f583-zjd7            gke-bootstrap-e2e-default-pool-4c15f583-zjd7 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:55:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:55:59 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 22:55:57 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019
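
The serial scheduler tests first require every kube-system pod to be Running and Ready; here the fluentd and kube-proxy pods on the rebooted node never regain Ready, so the precondition fails after 5m0s. A hypothetical sketch of that style of check (helper name and packaging are assumptions):

    // Hypothetical helper: list pods in a namespace that are not both Running and
    // Ready, mirroring the "2 / 11 pods in namespace kube-system are NOT in RUNNING
    // and READY state" precondition above.
    package e2esketch

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func notRunningAndReady(ctx context.Context, cs kubernetes.Interface, ns string) ([]string, error) {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var bad []string
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if p.Status.Phase != v1.PodRunning || !ready {
                bad = append(bad, p.Name)
            }
        }
        return bad, nil
    }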

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 00:03:40.684: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213958f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc4203aae50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 00:50:20.469: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a8f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31400

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 23:37:08.734: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218538f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 01:04:01.933: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ded8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 23:50:22.951: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420230ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 02:15:28.679: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222d04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 23:57:13.616: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217cd8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 02:08:53.313: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42303aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4203aae50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33285

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 01:39:37.681: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421de84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 23:40:22.144: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420afe4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 01:32:46.016: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bb4ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 23:23:59.297: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212658f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 01:42:54.993: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422bfa4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29831

Failed: [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 00:43:45.480: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42138e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27703 #32981 #35286

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 00:00:27.132: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421152ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 00:10:06.499: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422582ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 02:12:01.716: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224544f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 01:29:17.777: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d024f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/270/
Multiple broken tests:

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc420a0e280>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 04:11:29 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 04:11:59 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 04:11:29 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.72.0.25 StartTime:2017-01-26 04:11:29 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4212c6a10} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://7ae6f2d6cadd3956a16b28411f2bd41fb7fcebf66158d3339dcc5a3f6b677927}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 04:11:29 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 04:11:59 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 04:11:29 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.72.0.25 StartTime:2017-01-26 04:11:29 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4212c6a10} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://7ae6f2d6cadd3956a16b28411f2bd41fb7fcebf66158d3339dcc5a3f6b677927}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc4203a7710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Jan 26 05:54:15.715: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642
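
Each HPA case drives CPU load against the target and then waits up to 15 minutes for the workload's pod count to settle at the expected size; that wait is what times out in all the autoscaling failures in this run. A rough sketch of the kind of poll involved, again with hypothetical names and client setup:

    // Hypothetical helper: wait until the number of Running pods matching a label
    // selector reaches want, the kind of condition that times out after 15m above.
    package e2esketch

    import (
        "context"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForPodCount(ctx context.Context, cs kubernetes.Interface, ns, selector string, want int, timeout time.Duration) error {
        return wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, nil
            }
            running := 0
            for _, p := range pods.Items {
                if p.Status.Phase == v1.PodRunning {
                    running++
                }
            }
            return running == want, nil
        })
    }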

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Jan 26 09:11:39.270: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:629
Jan 26 06:00:01.684: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:626

Issues about this test specifically: #28420 #36122
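
The cluster-info test shells out to kubectl and requires the expected system services, KubeDNS among them, to be listed; the "Missing KubeDNS" message above is that string check failing. A trivial standalone sketch of the same idea (not the test's own code):

    // Hypothetical check: run "kubectl cluster-info" and verify that KubeDNS is
    // listed, the condition that fails with "Missing KubeDNS" above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "cluster-info").CombinedOutput()
        if err != nil {
            fmt.Println("kubectl cluster-info failed:", err)
            return
        }
        if !strings.Contains(string(out), "KubeDNS") {
            fmt.Println("Missing KubeDNS in kubectl cluster-info")
            return
        }
        fmt.Println("KubeDNS is running")
    }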

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Jan 26 08:16:27.602: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203a7710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Jan 26 05:27:09.795: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 26 10:23:05.248: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 26 09:40:00.350: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1580

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203a7710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203a7710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan 26 07:07:47.974: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Jan 26 08:49:59.964: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 26 10:07:17.021: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/273/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422cc1c30>: {
        s: "Namespace e2e-tests-services-67q27 is active",
    }
    Namespace e2e-tests-services-67q27 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876
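
The [Serial] scheduler cases refuse to start while namespaces from earlier tests are still tearing down; here the leftover e2e-tests-services-67q27 namespace never finished deleting, so this case and the ones below all fail the same precondition. A hedged sketch of that kind of wait, with hypothetical naming:

    // Hypothetical helper: wait until no leftover "e2e-tests-" namespaces remain,
    // the precondition that keeps failing with "Namespace ... is active" here.
    package e2esketch

    import (
        "context"
        "strings"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForE2ENamespacesGone(ctx context.Context, cs kubernetes.Interface, keep string, timeout time.Duration) error {
        return wait.PollImmediate(15*time.Second, timeout, func() (bool, error) {
            nss, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
            if err != nil {
                return false, nil
            }
            for _, ns := range nss.Items {
                if strings.HasPrefix(ns.Name, "e2e-tests-") && ns.Name != keep {
                    return false, nil // an old test namespace is still being torn down
                }
            }
            return true, nil
        })
    }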

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42187e540>: {
        s: "Namespace e2e-tests-services-67q27 is active",
    }
    Namespace e2e-tests-services-67q27 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f0cf20>: {
        s: "Namespace e2e-tests-services-67q27 is active",
    }
    Namespace e2e-tests-services-67q27 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 27 05:47:35.053: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172
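
The test above expects the Endpoints object for `slow-terminating-unready-pod` to publish its not-yet-ready pod (the service is configured so unready pods still appear under `notReadyAddresses`). Not part of the report: a hedged sketch of inspecting that directly; the namespace is a placeholder, since each run creates its own `e2e-tests-services-*` namespace.

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder: substitute the real e2e-tests-services-* namespace from the run.
	namespace := "e2e-tests-services-xxxxx"

	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ep, err := cs.CoreV1().Endpoints(namespace).Get(context.TODO(), "slow-terminating-unready-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The failing assertion is essentially: notReady should be > 0 within 5m.
	for i, s := range ep.Subsets {
		fmt.Printf("subset %d: ready=%d notReady=%d\n", i, len(s.Addresses), len(s.NotReadyAddresses))
	}
}
```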

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:808
Jan 27 07:26:57.670: Could not reach HTTP service through 104.197.148.23:32756 after 5m0s: received non-success return status "404 Not Found" trying to access http://104.197.148.23:32756/echo?msg=hello; got body: default backend - 404
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2530

Issues about this test specifically: #26134

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc422be2000>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 197, 100, 199],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.197.100.199:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:417

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
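
The dial error above most likely just means the GKE master was still down when the test reconnected. A hedged standalone reachability probe (the IP is copied from this failure and differs per run; `/healthz` may require credentials on some clusters, so any HTTP response, even 401, counts as "apiserver is back").

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// IP taken from the failure above; substitute the master endpoint of your cluster.
	url := "https://104.197.100.199/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Reachability probe only: certificate verification is skipped on purpose.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver is restarting
			fmt.Printf("attempt %d: %v\n", i, err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Printf("attempt %d: %s\n", i, resp.Status)
		return
	}
	fmt.Println("apiserver never came back")
}
```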

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42256a220>: {
        s: "Namespace e2e-tests-services-67q27 is active",
    }
    Namespace e2e-tests-services-67q27 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/274/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:48
Expected error:
    <*errors.errorString | 0xc4212011c0>: {
        s: "expected pod \"downwardapi-volume-651fda36-e4d3-11e6-8253-0242ac110005\" success: gave up waiting for pod 'downwardapi-volume-651fda36-e4d3-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-651fda36-e4d3-11e6-8253-0242ac110005" success: gave up waiting for pod 'downwardapi-volume-651fda36-e4d3-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31836
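
Most of the failures in this run share the same "gave up waiting for pod ... to be 'success or failure' after 5m0s" timeout from framework/util.go, i.e. the test pod never reached a terminal phase. A hedged sketch of the same kind of poll with client-go; the pod name and namespace are placeholders, and `wait.PollImmediate` from apimachinery is assumed to be available.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	namespace, name := "default", "my-test-pod" // placeholders

	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s for up to 5m, mirroring the timeout in the failures above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed: %s", name, pod.Status.Message)
		default:
			return false, nil // Pending/Running: keep polling
		}
	})
	if err != nil {
		fmt.Fprintf(os.Stderr, "gave up waiting for pod %q: %v\n", name, err)
		os.Exit(1)
	}
	fmt.Printf("pod %q succeeded\n", name)
}
```

When this times out, the pod is usually stuck Pending (scheduling or image pull) or Running (the container never exits), which `kubectl describe pod` on the stuck pod will distinguish.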

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
Expected error:
    <*errors.errorString | 0xc42165f610>: {
        s: "expected pod \"\" success: gave up waiting for pod 'pod-service-account-a941e3af-e4d2-11e6-8253-0242ac110005-pnv9b' to be 'success or failure' after 5m0s",
    }
    expected pod "" success: gave up waiting for pod 'pod-service-account-a941e3af-e4d2-11e6-8253-0242ac110005-pnv9b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37526

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc420d2c5e0>: {
        s: "expected pod \"pod-5a7f2adc-e4cc-11e6-8253-0242ac110005\" success: gave up waiting for pod 'pod-5a7f2adc-e4cc-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-5a7f2adc-e4cc-11e6-8253-0242ac110005" success: gave up waiting for pod 'pod-5a7f2adc-e4cc-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:37
Expected error:
    <*errors.errorString | 0xc4219b3fb0>: {
        s: "expected pod \"pod-configmaps-c454d377-e4d0-11e6-8253-0242ac110005\" success: gave up waiting for pod 'pod-configmaps-c454d377-e4d0-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-c454d377-e4d0-11e6-8253-0242ac110005" success: gave up waiting for pod 'pod-configmaps-c454d377-e4d0-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29052

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc4216f34c0>: {
        s: "expected pod \"pod-45b2ed7d-e4cd-11e6-8253-0242ac110005\" success: gave up waiting for pod 'pod-45b2ed7d-e4cd-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-45b2ed7d-e4cd-11e6-8253-0242ac110005" success: gave up waiting for pod 'pod-45b2ed7d-e4cd-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37500

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc4216cae50>: {
        s: "expected pod \"pod-0f324445-e4dc-11e6-8253-0242ac110005\" success: gave up waiting for pod 'pod-0f324445-e4dc-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-0f324445-e4dc-11e6-8253-0242ac110005" success: gave up waiting for pod 'pod-0f324445-e4dc-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31400

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc421807b00>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 27 13:40:11.714: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:42
Expected error:
    <*errors.errorString | 0xc4216f25d0>: {
        s: "expected pod \"pod-configmaps-2546544f-e4ce-11e6-8253-0242ac110005\" success: gave up waiting for pod 'pod-configmaps-2546544f-e4ce-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-2546544f-e4ce-11e6-8253-0242ac110005" success: gave up waiting for pod 'pod-configmaps-2546544f-e4ce-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34827

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124
Expected error:
    <*errors.errorString | 0xc4203cee50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
Expected error:
    <*errors.errorString | 0xc421b5a740>: {
        s: "expected pod \"pod-secrets-0a3c575d-e4e1-11e6-8253-0242ac110005\" success: gave up waiting for pod 'pod-secrets-0a3c575d-e4e1-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-0a3c575d-e4e1-11e6-8253-0242ac110005" success: gave up waiting for pod 'pod-secrets-0a3c575d-e4e1-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc421cff500>: {
        s: "expected pod \"pod-configmaps-ec9d0827-e4d5-11e6-8253-0242ac110005\" success: gave up waiting for pod 'pod-configmaps-ec9d0827-e4d5-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-ec9d0827-e4d5-11e6-8253-0242ac110005" success: gave up waiting for pod 'pod-configmaps-ec9d0827-e4d5-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #27245

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:203
Expected error:
    <*errors.errorString | 0xc4223497d0>: {
        s: "expected pod \"downwardapi-volume-99bcf559-e4e3-11e6-8253-0242ac110005\" success: gave up waiting for pod 'downwardapi-volume-99bcf559-e4e3-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-99bcf559-e4e3-11e6-8253-0242ac110005" success: gave up waiting for pod 'downwardapi-volume-99bcf559-e4e3-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37531

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:196
Expected error:
    <*errors.errorString | 0xc421edd110>: {
        s: "expected pod \"downwardapi-volume-3cb90724-e4cf-11e6-8253-0242ac110005\" success: gave up waiting for pod 'downwardapi-volume-3cb90724-e4cf-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-3cb90724-e4cf-11e6-8253-0242ac110005" success: gave up waiting for pod 'downwardapi-volume-3cb90724-e4cf-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
Expected error:
    <*errors.errorString | 0xc422157c40>: {
        s: "expected pod \"pod-49ef4345-e4de-11e6-8253-0242ac110005\" success: gave up waiting for pod 'pod-49ef4345-e4de-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-49ef4345-e4de-11e6-8253-0242ac110005" success: gave up waiting for pod 'pod-49ef4345-e4de-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37439

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:40
Expected error:
    <*errors.errorString | 0xc421806bf0>: {
        s: "expected pod \"pod-secrets-da488b6b-e4e2-11e6-8253-0242ac110005\" success: gave up waiting for pod 'pod-secrets-da488b6b-e4e2-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-da488b6b-e4e2-11e6-8253-0242ac110005" success: gave up waiting for pod 'pod-secrets-da488b6b-e4e2-11e6-8253-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35256

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/276/
Multiple broken tests:

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 27 21:09:10.029: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 27 23:37:24.815: Cannot added new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1585

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:220
Jan 27 23:44:39.412: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:184

Issues about this test specifically: #26955

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
Jan 28 00:46:33.971: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/279/
Multiple broken tests:

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 29 00:54:48.081: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Jan 29 03:12:27.418: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c1eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29933 #34111 #38765
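
The AfterSuite failure means one node stayed NotReady after the run finished. A hedged sketch (same `KUBECONFIG` assumption as the earlier sketches) that dumps each node's Ready condition, including the kubelet's reason and message.

```go
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Reason/Message explain why the kubelet stopped posting Ready.
				fmt.Printf("%s\tReady=%s\treason=%s\tmsg=%s\n", n.Name, c.Status, c.Reason, c.Message)
			}
		}
	}
}
```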

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ disks ]
+NAME                                                             ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-0d4f-pvc-459b1216-e60e-11e6-b5fb-42010af00008  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454
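
DiffResources is a before/after diff of the project's GCP resources, and here it caught a persistent disk left behind by a PVC. Not part of the job itself: a hedged sketch, assuming a recent google-api-go-client and Application Default Credentials, of listing unattached PVC-style disks; the project and zone values are placeholders, and pagination is ignored for brevity.

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"google.golang.org/api/compute/v1"
)

func main() {
	// Placeholders: substitute the e2e project and zone from the job config.
	project, zone := "my-gke-e2e-project", "us-central1-f"

	ctx := context.Background()
	svc, err := compute.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		panic(err)
	}
	list, err := svc.Disks.List(project, zone).Do()
	if err != nil {
		panic(err)
	}
	for _, d := range list.Items {
		// Leaked PVC-backed disks look like gke-<cluster>-pvc-<uid> and are attached to nothing.
		if strings.HasPrefix(d.Name, "gke-") && strings.Contains(d.Name, "-pvc-") && len(d.Users) == 0 {
			fmt.Printf("%s\t%dGB\t%s\n", d.Name, d.SizeGb, d.Status)
		}
	}
}
```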

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/280/
Multiple broken tests:

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc421b33de0>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:943
Jan 29 09:03:24.127: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 29 09:14:38.280: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:914
Jan 29 09:27:01.290: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
Jan 29 08:09:08.180: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26209 #29227 #32132 #37516

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/281/
Multiple broken tests:

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 17:46:03.467: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42217cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28283

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 18:01:11.949: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220d9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 29 13:11:42.375: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 18:11:27.201: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422575678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 18:07:46.886: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42226d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc4203ac370>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 17:49:11.755: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a67678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 17:10:47.321: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42206a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 18:42:53.337: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422cc1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 17:33:47.260: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422050278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 18:46:07.533: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42226d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 17:14:53.926: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422159678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38439

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 18:49:35.891: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422460c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27079

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Expected error:
    <*errors.errorString | 0xc4203ac370>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b6d630>: {
        s: "4 / 13 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1a526b72-fpkh gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 11:25:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:21:05 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:08:59 -0800 PST  }]\nheapster-v1.2.0-2168613315-bxjlm                                   gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 12:43:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:21:23 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 12:43:00 -0800 PST  }]\nkube-dns-4101612645-gwsks                                          gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:59 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:52 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-1a526b72-fpkh            gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 11:25:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:20:38 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:08:59 -0800 PST  }]\n",
    }
    4 / 13 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1a526b72-fpkh gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 11:25:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:21:05 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:08:59 -0800 PST  }]
    heapster-v1.2.0-2168613315-bxjlm                                   gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 12:43:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:21:23 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 12:43:00 -0800 PST  }]
    kube-dns-4101612645-gwsks                                          gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:59 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:52 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-1a526b72-fpkh            gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 11:25:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:20:38 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:08:59 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279
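
This failure (and the NodeSelector one further down) trips the same precondition: four kube-system pods on one node never returned to Ready within 5 minutes, so the scheduler-predicate test never really ran. A hedged sketch of listing the not-ready kube-system pods with client-go, under the same `KUBECONFIG` assumption as above.

```go
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if p.Status.Phase != corev1.PodRunning || !ready {
			// These are the pods the precondition in scheduler_predicates.go is waiting on.
			fmt.Printf("%s\t%s\tphase=%s\tready=%v\n", p.Name, p.Spec.NodeName, p.Status.Phase, ready)
		}
	}
}
```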

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 19:08:06.392: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42246e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 17:37:27.683: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422490278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 18:04:28.914: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219e7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 16:54:31.798: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422c98c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420a3a1e0>: {
        s: "4 / 13 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1a526b72-fpkh gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 11:25:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:21:05 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:08:59 -0800 PST  }]\nheapster-v1.2.0-2168613315-bxjlm                                   gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 12:43:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:21:23 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 12:43:00 -0800 PST  }]\nkube-dns-4101612645-gwsks                                          gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:59 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:52 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-1a526b72-fpkh            gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 11:25:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:20:38 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:08:59 -0800 PST  }]\n",
    }
    4 / 13 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-1a526b72-fpkh gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 11:25:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:21:05 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:08:59 -0800 PST  }]
    heapster-v1.2.0-2168613315-bxjlm                                   gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 12:43:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:21:23 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 12:43:00 -0800 PST  }]
    kube-dns-4101612645-gwsks                                          gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:59 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:28:52 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-1a526b72-fpkh            gke-bootstrap-e2e-default-pool-1a526b72-fpkh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 11:25:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-29 13:20:38 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-29 15:08:59 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc4203ac370>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34064

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 17:18:07.361: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42167ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35790

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 17:21:20.729: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224c0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 17:52:32.302: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225e0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 16:48:09.270: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42250f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc4203ac370>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #28337

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:344
Jan 29 15:15:13.240: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:297

Issues about this test specifically: #27673

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 18:15:08.189: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222d6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 29 18:26:33.078: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421723678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:220
Jan 29 14:44:40.680: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:184

Issues about this test specifically: #26955
