ci-kubernetes-e2e-gci-gke: broken test run #38019

Closed
k8s-github-robot opened this issue Dec 3, 2016 · 36 comments
Assignees
Labels
area/test-infra, kind/flake, priority/backlog

Comments

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/862/

Run so broken it didn't make JUnit output!

@k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 3, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/894/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/914/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1002/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1080/

Multiple broken tests:

Failed: TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 20 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-bc076df9  n1-standard-2               2016-12-06T23:24:42.006-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-bc076df9-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-bc076df9-973e  us-central1-f  n1-standard-2               10.240.0.2   104.154.219.252  RUNNING
+gke-bootstrap-e2e-default-pool-bc076df9-h1ns  us-central1-f  n1-standard-2               10.240.0.4   146.148.37.45    RUNNING
+gke-bootstrap-e2e-default-pool-bc076df9-mpst  us-central1-f  n1-standard-2               10.240.0.3   130.211.212.75   RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-bc076df9-973e  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-bc076df9-h1ns  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-bc076df9-mpst  us-central1-f  100      pd-standard  READY
+default-route-9f4b95d9d949e2dd                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+default-route-d49e235c6118fa4e                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+gke-bootstrap-e2e-b7b35830-aa5c8c93-bc4e-11e6-bcec-42010af00005  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-bc076df9-mpst  1000
+gke-bootstrap-e2e-b7b35830-aa651cbc-bc4e-11e6-bcec-42010af00005  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-bc076df9-h1ns  1000
+gke-bootstrap-e2e-b7b35830-abc86535-bc4e-11e6-bcec-42010af00005  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-bc076df9-973e  1000
+gke-bootstrap-e2e-b7b35830-all  bootstrap-e2e  10.72.0.0/14      tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-b7b35830-ssh  bootstrap-e2e  104.197.26.30/32  tcp:22                                  gke-bootstrap-e2e-b7b35830-node
+gke-bootstrap-e2e-b7b35830-vms  bootstrap-e2e  10.240.0.0/16     icmp,tcp:1-65535,udp:1-65535            gke-bootstrap-e2e-b7b35830-node

Issues about this test specifically: #33373 #33416 #34060
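
For context on what DiffResources is flagging: the check amounts to snapshotting the project's GCE resources (managed instance groups, instances, disks, routes, firewall rules) before and after the run and diffing the two listings, so every `+` line above is a resource the teardown failed to delete. Below is a minimal illustrative sketch of that idea in Go, assuming an authenticated `gcloud` CLI on PATH; it is not the actual e2e.go implementation.

```go
// leakcheck: illustrative sketch of a DiffResources-style check.
// Snapshot GCE resources before and after a test run and report
// anything that only appears in the "after" snapshot as leaked.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// snapshot lists a few GCE resource types via gcloud and returns the
// combined output split into lines.
func snapshot() ([]string, error) {
	var lines []string
	// Resource types that show up in the leaked-resource diffs above.
	cmds := [][]string{
		{"compute", "instance-groups", "managed", "list"},
		{"compute", "instances", "list"},
		{"compute", "disks", "list"},
		{"compute", "routes", "list"},
		{"compute", "firewall-rules", "list"},
	}
	for _, args := range cmds {
		out, err := exec.Command("gcloud", args...).Output()
		if err != nil {
			return nil, fmt.Errorf("gcloud %s: %v", strings.Join(args, " "), err)
		}
		lines = append(lines, strings.Split(strings.TrimSpace(string(out)), "\n")...)
	}
	return lines, nil
}

// leaked returns the lines present in after but not in before.
func leaked(before, after []string) []string {
	seen := make(map[string]bool, len(before))
	for _, l := range before {
		seen[l] = true
	}
	var extra []string
	for _, l := range after {
		if !seen[l] {
			extra = append(extra, "+"+l)
		}
	}
	return extra
}

func main() {
	before, err := snapshot()
	if err != nil {
		panic(err)
	}
	// ... run the e2e suite and its teardown here ...
	after, err := snapshot()
	if err != nil {
		panic(err)
	}
	if extra := leaked(before, after); len(extra) > 0 {
		fmt.Printf("Error: %d leaked resources\n%s\n", len(extra), strings.Join(extra, "\n"))
	}
}
```

When teardown times out (as in the TearDown and Deferred TearDown failures in this same run), none of the cluster's nodes, disks, routes, or firewall rules get deleted, which is why the diff reports a whole cluster's worth of resources.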

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 50m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1100/

Multiple broken tests:

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  7 08:15:16.018: Couldn't delete ns: "e2e-tests-limitrange-bm3mc": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-limitrange-bm3mc/persistentvolumeclaims\"") has prevented the request from succeeding (get persistentvolumeclaims) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-limitrange-bm3mc/persistentvolumeclaims\\\"\") has prevented the request from succeeding (get persistentvolumeclaims)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420d180a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #27503

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:107
Expected error:
    <*errors.StatusError | 0xc42022dd80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-deployment-6bwkx/pods/nginx-3794253002-8t80v\\\"\") has prevented the request from succeeding (delete pods nginx-3794253002-8t80v)",
            Reason: "InternalError",
            Details: {
                Name: "nginx-3794253002-8t80v",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-deployment-6bwkx/pods/nginx-3794253002-8t80v\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-deployment-6bwkx/pods/nginx-3794253002-8t80v\"") has prevented the request from succeeding (delete pods nginx-3794253002-8t80v)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1455

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  7 08:15:16.018: Couldn't delete ns: "e2e-tests-svc-latency-2z4mh": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-svc-latency-2z4mh\"") has prevented the request from succeeding (delete namespaces e2e-tests-svc-latency-2z4mh) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-svc-latency-2z4mh\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-svc-latency-2z4mh)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420f42550), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #30632

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.StatusError | 0xc420c13700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-replication-controller-px1sz/pods?fieldSelector=metadata.name%3Dmy-hostname-basic-5353edf3-bc98-11e6-a4f4-0242ac110006-nxc7r\\\"\") has prevented the request from succeeding (get pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-px1sz/pods?fieldSelector=metadata.name%3Dmy-hostname-basic-5353edf3-bc98-11e6-a4f4-0242ac110006-nxc7r\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-px1sz/pods?fieldSelector=metadata.name%3Dmy-hostname-basic-5353edf3-bc98-11e6-a4f4-0242ac110006-nxc7r\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1136/

Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 20 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-8e2ab3c9  n1-standard-2               2016-12-07T23:32:41.431-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-8e2ab3c9-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
+gke-bootstrap-e2e-default-pool-8e2ab3c9-2sps  us-central1-f  n1-standard-2               10.240.0.4   35.184.77.152  RUNNING
+gke-bootstrap-e2e-default-pool-8e2ab3c9-5wvi  us-central1-f  n1-standard-2               10.240.0.2   35.184.1.7     RUNNING
+gke-bootstrap-e2e-default-pool-8e2ab3c9-c6vs  us-central1-f  n1-standard-2               10.240.0.3   35.184.49.45   RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-8e2ab3c9-2sps  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-8e2ab3c9-5wvi  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-8e2ab3c9-c6vs  us-central1-f  100      pd-standard  READY
+default-route-1c4b187b0c91eb21                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+default-route-9a5a7ebe746b38f4                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+gke-bootstrap-e2e-a7239db8-df593a67-bd18-11e6-9e4b-42010af0003a  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-8e2ab3c9-2sps  1000
+gke-bootstrap-e2e-a7239db8-dfb13d54-bd18-11e6-9e4b-42010af0003a  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-8e2ab3c9-5wvi  1000
+gke-bootstrap-e2e-a7239db8-e06d26a5-bd18-11e6-9e4b-42010af0003a  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-8e2ab3c9-c6vs  1000
+gke-bootstrap-e2e-a7239db8-all  bootstrap-e2e  10.72.0.0/14       tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-a7239db8-ssh  bootstrap-e2e  104.154.174.12/32  tcp:22                                  gke-bootstrap-e2e-a7239db8-node
+gke-bootstrap-e2e-a7239db8-vms  bootstrap-e2e  10.240.0.0/16      tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-a7239db8-node

Issues about this test specifically: #33373 #33416 #34060

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #35658

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:68
Expected error:
    <*errors.errorString | 0xc420a931f0>: {
        s: "error waiting for deployment \"test-rolling-update-deployment\" status to match expectation: total pods available: 1, less than the min required: 2",
    }
    error waiting for deployment "test-rolling-update-deployment" status to match expectation: total pods available: 1, less than the min required: 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:337

Issues about this test specifically: #31075 #36286 #38041

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 50m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1158/

Multiple broken tests:

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  8 09:24:21.379: Couldn't delete ns: "e2e-tests-v1job-hlqth": namespace e2e-tests-v1job-hlqth was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-v1job-hlqth was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29657

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
wait for pod "client-containers-66def968-bd6a-11e6-b75f-0242ac110003" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4204305c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #36706

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1089
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.46.213 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-fnqd1] []  <nil> Created e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc\nScaling up e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc4209aec60 exit status 1 <nil> <nil> true [0xc4203c0c10 0xc4203c0c38 0xc4203c0c68] [0xc4203c0c10 0xc4203c0c38 0xc4203c0c68] [0xc4203c0c30 0xc4203c0c48] [0xd19810 0xd19810] 0xc420903260 <nil>}:\nCommand stdout:\nCreated e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc\nScaling up e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.46.213 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-fnqd1] []  <nil> Created e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc
    Scaling up e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc4209aec60 exit status 1 <nil> <nil> true [0xc4203c0c10 0xc4203c0c38 0xc4203c0c68] [0xc4203c0c10 0xc4203c0c38 0xc4203c0c68] [0xc4203c0c30 0xc4203c0c48] [0xd19810 0xd19810] 0xc420903260 <nil>}:
    Command stdout:
    Created e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc
    Scaling up e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-5d7549eff730c4338ff7f2d883a7c6bc up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:170

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:69
Expected error:
    <*errors.errorString | 0xc420f2e370>: {
        s: "expected pod \"pod-5e433cfd-bd6a-11e6-938c-0242ac110003\" success: gave up waiting for pod 'pod-5e433cfd-bd6a-11e6-938c-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-5e433cfd-bd6a-11e6-938c-0242ac110003" success: gave up waiting for pod 'pod-5e433cfd-bd6a-11e6-938c-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2156

Issues about this test specifically: #36183

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc42066cad0>: {
        s: "Only 0 pods started out of 2",
    }
    Only 0 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:348

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:359
Expected error:
    <*errors.errorString | 0xc4203a15f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1650

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
wait for pod "pod-59b0cb34-bd6a-11e6-9eed-0242ac110003" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203ac460>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1204/

Multiple broken tests:

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #35658

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:201
Expected error:
    <*errors.errorString | 0xc420cf0d20>: {
        s: "Failed to execute a successful GET within 30s, Last response body for http://104.154.209.61:31560, host :\ndefault backend - 404\n\ntimed out waiting for the condition\n",
    }
    Failed to execute a successful GET within 30s, Last response body for http://104.154.209.61:31560, host :
    default backend - 404
    
    timed out waiting for the condition
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress_utils.go:862

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 50m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 20 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-d0e55c11  n1-standard-2               2016-12-09T04:18:49.321-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-d0e55c11-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-d0e55c11-2tja  us-central1-f  n1-standard-2               10.240.0.3   104.154.209.61  RUNNING
+gke-bootstrap-e2e-default-pool-d0e55c11-qoyx  us-central1-f  n1-standard-2               10.240.0.2   35.184.71.116   RUNNING
+gke-bootstrap-e2e-default-pool-d0e55c11-s7b2  us-central1-f  n1-standard-2               10.240.0.4   35.184.74.234   RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-d0e55c11-2tja  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-d0e55c11-qoyx  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-d0e55c11-s7b2  us-central1-f  100      pd-standard  READY
+default-route-2e02aa3e1031f835                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+default-route-9a412591a8debffc                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+gke-bootstrap-e2e-fac5945c-faf45857-be09-11e6-896a-42010af0003c  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-d0e55c11-2tja  1000
+gke-bootstrap-e2e-fac5945c-fead9bd9-be09-11e6-896a-42010af0003c  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-d0e55c11-s7b2  1000
+gke-bootstrap-e2e-fac5945c-ff068eea-be09-11e6-896a-42010af0003c  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-d0e55c11-qoyx  1000
+gke-bootstrap-e2e-fac5945c-all  bootstrap-e2e  10.72.0.0/14      udp,icmp,esp,ah,sctp,tcp
+gke-bootstrap-e2e-fac5945c-ssh  bootstrap-e2e  104.197.77.86/32  tcp:22                                  gke-bootstrap-e2e-fac5945c-node
+gke-bootstrap-e2e-fac5945c-vms  bootstrap-e2e  10.240.0.0/16     tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-fac5945c-node

Issues about this test specifically: #33373 #33416 #34060

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1308/

Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:360
Expected error:
    <*errors.errorString | 0xc420ad49e0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:319

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:201
Expected error:
    <*errors.errorString | 0xc420cba0c0>: {
        s: "Failed to execute a successful GET within 30s, Last response body for http://35.184.74.234:32461, host :\n\n\ntimed out waiting for the condition\n",
    }
    Failed to execute a successful GET within 30s, Last response body for http://35.184.74.234:32461, host :
    
    
    timed out waiting for the condition
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress_utils.go:863

Issues about this test specifically: #38556

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:502
Expected error:
    <*errors.errorString | 0xc4203ad090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:266

Issues about this test specifically: #32584

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Expected error:
    <*errors.errorString | 0xc420a33420>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:15, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63617053824, nsec:0, loc:(*time.Location)(0x3a90860)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63617053824, nsec:0, loc:(*time.Location)(0x3a90860)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:15, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63617053824, nsec:0, loc:(*time.Location)(0x3a90860)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63617053824, nsec:0, loc:(*time.Location)(0x3a90860)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1128

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 11 03:55:27.031: Couldn't delete ns: "e2e-tests-kubectl-sg9sf": namespace e2e-tests-kubectl-sg9sf was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-sg9sf was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1312/

Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:393
Expected error:
    <*errors.errorString | 0xc42043dda0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #36970

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:64
wait for pod "client-containers-9ffb1fcf-bfa9-11e6-9371-0242ac11000a" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420344ea0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122

Issues about this test specifically: #29467

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
wait for pod "pod-27fa523a-bfa9-11e6-8c55-0242ac11000a" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203c1170>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122

Issues about this test specifically: #37500

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
wait for pod "pod-host-path-test" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203acbf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
    <*errors.errorString | 0xc4203d3270>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #37056

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:326
Dec 11 06:06:21.577: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1974

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:336
Dec 11 06:03:01.990: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1974

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc42042b640>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420e1c960>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:349

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:401
Expected error:
    <*errors.errorString | 0xc4203d5430>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:237

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:369
Dec 11 05:59:15.003: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:322

Issues about this test specifically: #27673

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1315/

Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420fe0570>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:398

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:312
Dec 11 07:29:58.716: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:254

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420c48a70>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:349

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:502
Expected error:
    <*errors.errorString | 0xc4203fd710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:266

Issues about this test specifically: #32584

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:401
Expected error:
    <*errors.errorString | 0xc42031f6e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:237

Issues about this test specifically: #26168 #27450

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1358/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203ac250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 12 03:01:16.031: Couldn't delete ns: "e2e-tests-kubectl-2483t": namespace e2e-tests-kubectl-2483t was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-2483t was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42042a6f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32375

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:189
Expected error:
    <*errors.errorString | 0xc4203ad4e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:102

Issues about this test specifically: #36564

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 12 03:03:55.478: Couldn't delete ns: "e2e-tests-disruption-pbkq7": namespace e2e-tests-disruption-pbkq7 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-pbkq7 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #32639

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc420451ae0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:493
Expected error:
    <*errors.errorString | 0xc4203d2a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3805

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc420867bc0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63617136975, nsec:0, loc:(*time.Location)(0x3a90860)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63617136975, nsec:0, loc:(*time.Location)(0x3a90860)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1beta1.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63617137036, nsec:0, loc:(*time.Location)(0x3a90860)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63617137036, nsec:0, loc:(*time.Location)(0x3a90860)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3955596946\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63617136975, nsec:0, loc:(*time.Location)(0x3a90860)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63617136975, nsec:0, loc:(*time.Location)(0x3a90860)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1beta1.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63617137036, nsec:0, loc:(*time.Location)(0x3a90860)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63617137036, nsec:0, loc:(*time.Location)(0x3a90860)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3955596946\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1344

Issues about this test specifically: #31697 #36574

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc42077a040>: {
        s: "gave up waiting for pod 'wget-test' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'wget-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:379
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:378

Issues about this test specifically: #26324 #27715 #28845

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1431/

Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:273
Expected error:
    <*errors.errorString | 0xc42070a2f0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:149

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc4205547b0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:398

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc4206d6820>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:349

Issues about this test specifically: #27196 #28998 #32403 #33341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1611/

Multiple broken tests:

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:262
failed to GET scheduled pod
Expected error:
    <*url.Error | 0xc42068ced0>: {
        Op: "Get",
        URL: "https://35.184.46.115/api/v1/namespaces/e2e-tests-pods-g48z4/pods/pod-submit-remove-903d7018-c449-11e6-bddb-0242ac11000b",
        Err: {
            Op: "read",
            Net: "tcp",
            Source: {IP: [172, 17, 0, 11], Port: 51281, Zone: ""},
            Addr: {IP: "#\xb8.s", Port: 443, Zone: ""},
            Err: {Syscall: "read", Err: 0x68},
        },
    }
    Get https://35.184.46.115/api/v1/namespaces/e2e-tests-pods-g48z4/pods/pod-submit-remove-903d7018-c449-11e6-bddb-0242ac11000b: read tcp 172.17.0.11:51281->35.184.46.115:443: read: connection reset by peer
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:206

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:348
Expected error:
    <*errors.errorString | 0xc420414630>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:273
0 (0; 34.419983ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/https:proxy-service-tj1jc:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 35.067148ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/https:proxy-service-tj1jc:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 45.164054ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 46.423492ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 50.979845ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 52.713598ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/https:proxy-service-tj1jc-rzc1h:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 53.96682ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 60.530027ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 82.190976ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 93.136205ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/https:proxy-service-tj1jc:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:460/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 22.33713ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 24.304206ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 34.848456ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 36.775805ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 39.280696ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 19.521566ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/https:proxy-service-tj1jc-rzc1h:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:462/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 22.18893ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 29.473997ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 30.726747ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 35.321494ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
3 (0; 32.315745ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
3 (0; 35.434913ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/https:proxy-service-tj1jc:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:462/' }],RetryAfterSeconds:0,} Code:503}
3 (0; 36.035583ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
3 (0; 38.836808ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
3 (0; 40.712753ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
3 (0; 41.740778ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
4 (0; 27.196183ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
4 (0; 36.726635ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
4 (0; 38.98603ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:1080/' }],RetryAfterSeconds:0,} Code:503}
4 (0; 39.636449ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
4 (0; 41.870458ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
4 (0; 42.360132ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
5 (0; 26.306169ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:1080/' }],RetryAfterSeconds:0,} Code:503}
5 (0; 26.811908ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
5 (0; 32.556876ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
5 (0; 49.21786ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
6 (0; 9.550638ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/https:proxy-service-tj1jc:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:462/' }],RetryAfterSeconds:0,} Code:503}
6 (0; 24.066033ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
6 (0; 30.990446ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
6 (0; 33.160758ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:1080/' }],RetryAfterSeconds:0,} Code:503}
6 (0; 45.39441ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
6 (0; 47.465301ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
7 (0; 19.578991ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
7 (0; 21.805734ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
7 (0; 41.449118ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
7 (0; 44.414674ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
7 (0; 47.544022ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/https:proxy-service-tj1jc-rzc1h:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:460/' }],RetryAfterSeconds:0,} Code:503}
7 (0; 49.699491ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
7 (0; 60.579099ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
8 (0; 44.78861ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:1080/' }],RetryAfterSeconds:0,} Code:503}
8 (0; 46.098526ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/https:proxy-service-tj1jc-rzc1h:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:460/' }],RetryAfterSeconds:0,} Code:503}
8 (0; 49.361183ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
8 (0; 57.902219ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
8 (0; 58.198771ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 103.288266ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 119.585234ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 135.326187ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 169.551531ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 169.647918ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 169.967308ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 169.42021ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/https:proxy-service-tj1jc:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:462/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 169.473274ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 169.540263ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 172.959069ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
9 (0; 177.504465ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
10 (0; 30.778659ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
10 (0; 38.885457ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
10 (0; 41.686426ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
11 (0; 20.285424ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
11 (0; 26.260959ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
11 (0; 26.507055ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/https:proxy-service-tj1jc:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'https://10.72.2.40:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'https://10.72.2.40:462/' }],RetryAfterSeconds:0,} Code:503}
11 (0; 54.325291ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
11 (0; 56.19186ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
11 (0; 58.654236ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:1080/' }],RetryAfterSeconds:0,} Code:503}
11 (0; 59.235077ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
12 (0; 22.514139ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
12 (0; 25.993247ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:1080/' }],RetryAfterSeconds:0,} Code:503}
12 (0; 38.107365ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
12 (0; 41.090892ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
12 (0; 47.555669ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
13 (0; 20.97073ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
13 (0; 25.840587ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
13 (0; 26.692415ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
13 (0; 29.157953ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
13 (0; 32.884568ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:1080/' }],RetryAfterSeconds:0,} Code:503}
13 (0; 34.250239ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
14 (0; 15.869099ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/proxy-service-tj1jc:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
14 (0; 17.02589ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
14 (0; 21.404585ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/http:proxy-service-tj1jc-rzc1h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
14 (0; 37.11814ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
14 (0; 39.035322ms): path /api/v1/namespaces/e2e-tests-proxy-s0kgl/pods/proxy-service-tj1jc-rzc1h:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:1080/' }],RetryAfterSeconds:0,} Code:503}
14 (0; 40.627331ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:162/' }],RetryAfterSeconds:0,} Code:503}
14 (0; 41.48421ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'\nTrying to reach: 'http://10.72.2.40:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'read tcp 10.240.0.57:41710->104.198.45.113:22: read: connection reset by peer'
Trying to reach: 'http://10.72.2.40:160/' }],RetryAfterSeconds:0,} Code:503}
15 (0; 15.8
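
For context: every failing path above goes through the apiserver's proxy subresource for the test service/pods, and the wrapped "read tcp ...:22: connection reset by peer" is the SSH tunnel the GKE master uses to reach nodes. A minimal client-go sketch of the same kind of request (the kubeconfig path and the context-taking DoRaw signature of recent client-go are assumptions; the namespace, service and port names are copied from the log purely for illustration):

```go
// Sketch only: issue the same kind of service-proxy request as the failing
// paths above, i.e. roughly
//   GET /api/v1/namespaces/e2e-tests-proxy-s0kgl/services/http:proxy-service-tj1jc:portname1/proxy/
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	body, err := client.CoreV1().
		Services("e2e-tests-proxy-s0kgl").
		ProxyGet("http", "proxy-service-tj1jc", "portname1", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		// A 503 wrapping "read tcp ...:22: connection reset by peer" points
		// at the master->node SSH tunnel, not at the backend pod itself.
		fmt.Println("proxy request failed:", err)
		return
	}
	fmt.Printf("got %d bytes from the proxied service\n", len(body))
}
```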

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1847/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420deaaf0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:349

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203ac050>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267
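
For context: "timed out waiting for the condition" in this run's failures is the generic wait.ErrWaitTimeout message from k8s.io/apimachinery/pkg/util/wait, returned whenever a polled condition (pods running, namespace deleted, endpoints reachable) never becomes true within its timeout. A minimal sketch of that mechanism, with a stand-in condition rather than the framework's real check:

```go
// Sketch only: where "timed out waiting for the condition" comes from.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
		// The real e2e code checks pod phases, endpoint readiness, namespace
		// deletion, etc. here; returning false keeps polling until timeout.
		return false, nil
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}
```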

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:368
Expected error:
    <*errors.errorString | 0xc420b9e050>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1579

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc4215b4200>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:349

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
wait for pod "pod-bc68d21f-c7d6-11e6-8bba-0242ac110009" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203d3600>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420a6a360>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:20, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63617960839, nsec:0, loc:(*time.Location)(0x3807580)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63617960839, nsec:0, loc:(*time.Location)(0x3807580)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:20, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63617960839, nsec:0, loc:(*time.Location)(0x3807580)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63617960839, nsec:0, loc:(*time.Location)(0x3807580)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1055

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:235
starting pod liveness-http in namespace e2e-tests-container-probe-fvkr6
Expected error:
    <*errors.errorString | 0xc4203d3080>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:365

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc4203d2b50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc42043d9b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 21 15:44:04.392: Couldn't delete ns: "e2e-tests-kubectl-bdbw3": namespace e2e-tests-kubectl-bdbw3 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-bdbw3 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc420443250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #35283 #36867

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:945
Dec 21 15:43:11.686: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:295

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 21 15:45:39.972: Couldn't delete ns: "e2e-tests-disruption-sd3cx": namespace e2e-tests-disruption-sd3cx was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-disruption-sd3cx was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:502
Expected error:
    <*errors.errorString | 0xc4203fd3f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:266

Issues about this test specifically: #32584

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:34
Expected error:
    <*errors.errorString | 0xc420ff4b00>: {
        s: "expected pod \"client-containers-d0f205b7-c7d7-11e6-b437-0242ac110009\" success: gave up waiting for pod 'client-containers-d0f205b7-c7d7-11e6-b437-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-d0f205b7-c7d7-11e6-b437-0242ac110009" success: gave up waiting for pod 'client-containers-d0f205b7-c7d7-11e6-b437-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2144

Issues about this test specifically: #34520

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:826
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:825

Issues about this test specifically: #28493 #29964

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1877/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc421472900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-lbv2w--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-lbv2w--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-lbv2w--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
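
For context: the Forbidden failures in this run all happen while the test framework tries to grant its CI user cluster-admin through a ClusterRoleBinding, and the apiserver's RBAC escalation check rejects the request because the kubekins@... user only holds the two ownerrules shown. A minimal sketch of that kind of request, written against the current rbac.authorization.k8s.io/v1 client for clarity (the binding name and kubeconfig path are illustrative assumptions, and the RBAC API version actually in use at the time of this run may have differed):

```go
// Sketch only: create a ClusterRoleBinding granting cluster-admin to the CI
// user. Under RBAC escalation prevention, a requester that does not itself
// hold cluster-admin gets exactly the "attempt to grant extra privileges"
// 403 quoted above.
package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	binding := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-tests-example--cluster-admin"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "User",
			Name:     "kubekins@kubernetes-jenkins.iam.gserviceaccount.com",
		}},
	}

	_, err = client.RbacV1().ClusterRoleBindings().Create(context.TODO(), binding, metav1.CreateOptions{})
	if err != nil {
		fmt.Println(err) // Forbidden: attempt to grant extra privileges ...
	}
}
```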

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc420a2a580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-22k9t--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-22k9t--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-22k9t--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc4201bad00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-w9m0k--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-w9m0k--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-w9m0k--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1878/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc420687400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-l05w8--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-l05w8--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-l05w8--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc421358380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-7vs1v--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-7vs1v--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-7vs1v--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420f29480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-0k574--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-0k574--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-0k574--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1879/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420323200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-q2lp9--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-q2lp9--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-q2lp9--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc420e2e200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-rbbhp--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-rbbhp--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-rbbhp--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc420174e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-vpvr9--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-vpvr9--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-vpvr9--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1880/
Multiple broken tests:

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420d42680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-svq3f--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-svq3f--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-svq3f--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc421239900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-dntlt--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-dntlt--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-dntlt--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc4211e6600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-15x39--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-15x39--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-15x39--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1881/
Multiple broken tests:

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420c56400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-nf3q4--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-nf3q4--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-nf3q4--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc420941380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-hcwgn--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-hcwgn--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-hcwgn--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc420372480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-0tntb--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-0tntb--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-0tntb--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1882/
Multiple broken tests:

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc421082800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-0n6wn--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-0n6wn--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-0n6wn--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc4201fd000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-m6gmn--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-m6gmn--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-m6gmn--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc4202fd300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-hffx2--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-hffx2--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-hffx2--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1883/
Multiple broken tests:

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc420f37480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-9xkz5--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-9xkz5--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-9xkz5--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420dd0900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-vcrb3--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-vcrb3--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-vcrb3--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc420473500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-lz0sj--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-lz0sj--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-lz0sj--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1884/
Multiple broken tests:

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc4201fd600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420af5d80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc421266c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953
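
For context: unlike the 403s in the previous runs, "the server could not find the requested resource" is a generic 404, which usually means the client is requesting a path or group/version the apiserver does not serve. A minimal discovery-client sketch for checking what the server actually exposes (kubeconfig path is an assumption):

```go
// Sketch only: list the group/versions the apiserver serves, to narrow down
// a generic 404 like the ones above.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. "rbac.authorization.k8s.io/v1"
		}
	}
}
```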

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1885/
Multiple broken tests:

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc42042db80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc42024cf00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420421900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1886/
Multiple broken tests:

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc420b12200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc42015fd80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc420321a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1887/
Multiple broken tests:

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420ad7a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc420c5b380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc420dd9e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1888/
Multiple broken tests:

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc42086ea80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc4208d3600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc42042c480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1889/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc42028cd00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc420317f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc4211de980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/1890/
Multiple broken tests:

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc42016dc80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc4202cc580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc42102fe80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/2086/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:368
Dec 26 07:34:29.474: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1624

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Dec 26 07:40:09.068: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:438
Expected error:
    <*errors.errorString | 0xc4203d13b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:220

Issues about this test specifically: #28337

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1106
Dec 26 07:33:55.130: expected un-ready endpoint for Service webserver within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1104

Issues about this test specifically: #26172

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc4207aa810>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-26 07:26:16 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-26 07:26:47 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-26 07:26:16 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.72.0.152 StartTime:2016-12-26 07:26:16 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2016-12-26 07:26:16 -0800 PST,FinishedAt:2016-12-26 07:26:46 -0800 PST,ContainerID:docker://9b21b10e519829350eaa35e30ea6066751df8bd6d54da961b7f0699c1b126513,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://9b21b10e519829350eaa35e30ea6066751df8bd6d54da961b7f0699c1b126513}] QOSClass:}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-26 07:26:16 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-26 07:26:47 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-26 07:26:16 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.72.0.152 StartTime:2016-12-26 07:26:16 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2016-12-26 07:26:16 -0800 PST,FinishedAt:2016-12-26 07:26:46 -0800 PST,ContainerID:docker://9b21b10e519829350eaa35e30ea6066751df8bd6d54da961b7f0699c1b126513,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://9b21b10e519829350eaa35e30ea6066751df8bd6d54da961b7f0699c1b126513}] QOSClass:}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188
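
The dump above is a v1.PodStatus: wget-test-container terminated with ExitCode 1 about 30 seconds after starting, i.e. the in-cluster wget against an external address never got a reply. As a minimal sketch of pulling that exit code out of a pod status, assuming the k8s.io/api core/v1 types (the helper name is made up for illustration, not taken from the suite):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // containerExitCode returns the termination exit code of the named
    // container, and whether that container has terminated at all.
    func containerExitCode(status v1.PodStatus, name string) (int32, bool) {
        for _, cs := range status.ContainerStatuses {
            if cs.Name == name && cs.State.Terminated != nil {
                return cs.State.Terminated.ExitCode, true
            }
        }
        return 0, false
    }

    func main() {
        code, ok := containerExitCode(v1.PodStatus{}, "wget-test-container")
        fmt.Println(code, ok)
    }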

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc421277a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.72.0.149:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.72.0.149:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.72.0.149:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:214

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:673
Dec 26 07:19:47.064: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:670

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:125
Timed out after 120.001s.
Expected
    <string>: content of file "/etc/labels": key1="value1"
    key2="value2"
    [... the same two lines repeated for every poll until the 120s timeout ...]
    
to contain substring
    <string>: key3="value3"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:353
Expected error:
    <*errors.errorString | 0xc4203e4b70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:220

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:401
Expected error:
    <*errors.errorString | 0xc420431a00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:220

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:502
Expected error:
    <*errors.errorString | 0xc42044fb50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:220

Issues about this test specifically: #32584
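
Almost every failure in this run reduces to the same generic message. That exact string, "timed out waiting for the condition", is the message of wait.ErrWaitTimeout; it surfaces whenever a poll loop like the sketch below never sees its condition become true before the timeout. A sketch only, assuming k8s.io/apimachinery/pkg/util/wait, with arbitrary durations:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        err := wait.Poll(2*time.Second, 10*time.Second, func() (bool, error) {
            ready := false // e.g. "is the DNS probe pod answering yet?"
            return ready, nil
        })
        // Prints: timed out waiting for the condition
        fmt.Println(err)
    }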

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/2343/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc420498580>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:868
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:867

Issues about this test specifically: #28493 #29964
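
The two assertion texts recurring in these dumps ("Expected <bool>: false to be true" and "Expected error: ... not to have occurred") are Gomega's standard failure messages for the BeTrue and HaveOccurred matchers. A minimal sketch of both, shown only to illustrate the message format rather than the suite's own code (NewGomegaWithT is assumed from a current Gomega release):

    package demo_test

    import (
        "errors"
        "testing"

        "github.com/onsi/gomega"
    )

    func TestFailureMessages(t *testing.T) {
        g := gomega.NewGomegaWithT(t)
        // Fails with: Expected <bool>: false to be true
        g.Expect(false).To(gomega.BeTrue())
        // Fails with: Expected error: <*errors.errorString ...> ... not to have occurred
        g.Expect(errors.New("boom")).NotTo(gomega.HaveOccurred())
    }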

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc4211604e0>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63618783608, nsec:0, loc:(*time.Location)(0x3838a60)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63618783608, nsec:0, loc:(*time.Location)(0x3838a60)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63618783608, nsec:0, loc:(*time.Location)(0x3838a60)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63618783608, nsec:0, loc:(*time.Location)(0x3838a60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:374

Issues about this test specifically: #29197 #36289 #36598 #38528
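
The status dumped above says the deployment's Available condition is False with reason MinimumReplicasUnavailable: one of the three replicas never became ready within the wait. A minimal sketch of reading that condition out of a deployment status, assuming the current apps/v1 types (which mirror the v1beta1 fields shown in the dump; the helper name is illustrative only):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
    )

    // deploymentAvailable reports whether the Available condition is True.
    func deploymentAvailable(status appsv1.DeploymentStatus) bool {
        for _, c := range status.Conditions {
            if c.Type == appsv1.DeploymentAvailable {
                return c.Status == "True"
            }
        }
        return false
    }

    func main() {
        fmt.Println(deploymentAvailable(appsv1.DeploymentStatus{}))
    }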

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:66
Expected error:
    <*errors.errorString | 0xc4203815c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:283

Issues about this test specifically: #37144

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc42037f5e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:177
starting pod liveness-http in namespace e2e-tests-container-probe-tf0hk
Expected error:
    <*errors.errorString | 0xc4203fc180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:365

Issues about this test specifically: #38511

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc42042fab0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc420451e10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:336
Dec 31 04:25:27.146: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1962

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420c9e8d0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:398

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
wait for pod "client-containers-3b30c422-cf53-11e6-bae4-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203d3430>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122

Issues about this test specifically: #29994

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/2662/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4204275f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
wait for pod "pod-secrets-9fb53469-d413-11e6-b414-0242ac11000b" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203bd400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan  6 05:31:07.397: Couldn't delete ns: "e2e-tests-kubectl-7gp2w": namespace e2e-tests-kubectl-7gp2w was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-7gp2w was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:241
wait for pod "pod-service-account-ea154830-d413-11e6-bc06-0242ac11000b-xrf2n" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203d3250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122

Issues about this test specifically: #37526

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:407
Expected
    <*errors.errorString | 0xc4203bf1b0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:395

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420f148b0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:398

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan  6 05:34:07.688: Couldn't delete ns: "e2e-tests-disruption-s6479": namespace e2e-tests-disruption-s6479 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-disruption-s6479 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
    <*errors.errorString | 0xc420414e60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #37056

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:455
Jan  6 05:27:24.202: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:544

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc420ef0030>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:479

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/2845/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
Expected error:
    <*errors.errorString | 0xc420750b60>: {
        s: "expected pod \"pod-eebce195-d6d9-11e6-8f5f-0242ac110006\" success: gave up waiting for pod 'pod-eebce195-d6d9-11e6-8f5f-0242ac110006' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-eebce195-d6d9-11e6-8f5f-0242ac110006" success: gave up waiting for pod 'pod-eebce195-d6d9-11e6-8f5f-0242ac110006' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2144

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc420ea4010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:889

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:455
Jan  9 18:12:10.952: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:544

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421174010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1003

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:75
Expected error:
    <*errors.errorString | 0xc420fca350>: {
        s: "want pod 'test-webserver-e8ddd885-d6d9-11e6-9db3-0242ac110006' on 'gke-bootstrap-e2e-default-pool-067aac99-tcwc' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-e8ddd885-d6d9-11e6-9db3-0242ac110006' on 'gke-bootstrap-e2e-default-pool-067aac99-tcwc' to be 'Running' but was 'Pending'
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:57

Issues about this test specifically: #29521

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:788
Jan  9 18:16:16.880: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:298

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc421289960>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:23, Replicas:7, UpdatedReplicas:5, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:2, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63619611056, nsec:0, loc:(*time.Location)(0x38e1260)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63619611056, nsec:0, loc:(*time.Location)(0x38e1260)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, v1beta1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63619611056, nsec:0, loc:(*time.Location)(0x38e1260)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63619611017, nsec:0, loc:(*time.Location)(0x38e1260)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"nginx-3359961522\\\" is progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:23, Replicas:7, UpdatedReplicas:5, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:2, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63619611056, nsec:0, loc:(*time.Location)(0x38e1260)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63619611056, nsec:0, loc:(*time.Location)(0x38e1260)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1beta1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63619611056, nsec:0, loc:(*time.Location)(0x38e1260)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63619611017, nsec:0, loc:(*time.Location)(0x38e1260)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"nginx-3359961522\" is progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1402

Issues about this test specifically: #36265 #36353 #36628

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/3207/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc42035b420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends no data, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:376
Jan 17 04:23:11.331: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:265

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420f4ace0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:365

Issues about this test specifically: #27196 #28998 #32403 #33341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/3621/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203c5570>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:94
Expected error:
    <*errors.errorString | 0xc4209bdf40>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:20, UpdatedReplicas:20, ReadyReplicas:19, AvailableReplicas:19, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63621012053, nsec:0, loc:(*time.Location)(0x3ae90e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63621012053, nsec:0, loc:(*time.Location)(0x3ae90e0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:20, UpdatedReplicas:20, ReadyReplicas:19, AvailableReplicas:19, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63621012053, nsec:0, loc:(*time.Location)(0x3ae90e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63621012053, nsec:0, loc:(*time.Location)(0x3ae90e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1055

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc420367b20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
Expected error:
    <*errors.errorString | 0xc420341130>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #37502

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420c28aa0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:365

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 that expects no client request should support a client that connects, sends data, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:386
Jan 25 23:21:13.206: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:210

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc420375be0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420340420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203b6d10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/3651/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:426
Jan 26 14:42:09.077: Unexpected kubectl exec output. Wanted "running in container", got ""
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:392

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:262
Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:251

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:192
Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:180
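Both "Failed to observe pod deletion" cases follow the same shape: the test deletes a pod and then expects a Deleted event on a watch before a deadline. The sketch below shows that pattern with present-day client-go calls, not the 2016-era test code; the client, namespace, and pod name are placeholders.

```go
package e2e_sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// waitForPodDeletion watches a single pod and returns nil once a Deleted
// event is seen, or an error if the watch closes or the deadline passes
// first -- the latter being the situation the failures above report.
func waitForPodDeletion(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	w, err := c.CoreV1().Pods(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()

	deadline := time.After(timeout)
	for {
		select {
		case ev, ok := <-w.ResultChan():
			if !ok {
				return fmt.Errorf("watch closed before pod %q was deleted", name)
			}
			if ev.Type == watch.Deleted {
				return nil // deletion observed
			}
		case <-deadline:
			return fmt.Errorf("failed to observe deletion of pod %q within %v", name, timeout)
		}
	}
}
```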

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483
