ci-kubernetes-e2e-gci-gce: broken test run #37062

Closed
k8s-github-robot opened this issue Nov 18, 2016 · 27 comments
Labels: area/test-infra, kind/flake, priority/backlog

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/65/

Run so broken it didn't make JUnit output!

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/backlog Higher priority than priority/awaiting-more-evidence. area/test-infra labels Nov 18, 2016
@fejta (Contributor) commented Nov 18, 2016

Need more timeout samples to figure out what is going on.

@fejta fejta unassigned ixdy Nov 18, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/157/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/316/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/362/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/367/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/406/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/413/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/464/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/571/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/613/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/735/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/783/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/818/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/881/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/942/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/1061/

Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 30 leaked resources
+NAME                           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+bootstrap-e2e-minion-template  n1-standard-2               2016-12-07T17:52:18.528-08:00
+NAME                        LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+bootstrap-e2e-minion-group  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                             ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+bootstrap-e2e-master             us-central1-f  n1-standard-1               10.240.0.2   130.211.223.213  RUNNING
+bootstrap-e2e-minion-group-59mz  us-central1-f  n1-standard-2               10.240.0.3   130.211.235.22   RUNNING
+bootstrap-e2e-minion-group-6hag  us-central1-f  n1-standard-2               10.240.0.4   104.155.185.186  RUNNING
+bootstrap-e2e-minion-group-lttg  us-central1-f  n1-standard-2               10.240.0.5   130.211.209.145  RUNNING
+bootstrap-e2e-master                                            us-central1-f  20       pd-standard  READY
+bootstrap-e2e-master-pd                                         us-central1-f  20       pd-ssd       READY
+bootstrap-e2e-minion-group-59mz                                 us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-6hag                                 us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-lttg                                 us-central1-f  100      pd-standard  READY
+NAME                     REGION       ADDRESS          STATUS
+bootstrap-e2e-master-ip  us-central1  130.211.223.213  IN_USE
+bootstrap-e2e-43a1d400-bce9-11e6-9ce9-42010af00002  bootstrap-e2e  10.180.0.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-59mz  1000
+bootstrap-e2e-449b612d-bce9-11e6-9ce9-42010af00002  bootstrap-e2e  10.180.1.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-6hag  1000
+bootstrap-e2e-453844f3-bce9-11e6-9ce9-42010af00002  bootstrap-e2e  10.180.2.0/24  us-central1-f/instances/bootstrap-e2e-master             1000
+bootstrap-e2e-46646fd9-bce9-11e6-9ce9-42010af00002  bootstrap-e2e  10.180.3.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-lttg  1000
+default-route-146269a9dc506162                      bootstrap-e2e  10.240.0.0/16                                                           1000
+default-route-60b972aa9b0fd03a                      bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                 1000
+bootstrap-e2e-default-internal-master         bootstrap-e2e  10.0.0.0/8     tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp                        bootstrap-e2e-master
+bootstrap-e2e-default-internal-node           bootstrap-e2e  10.0.0.0/8     tcp:1-65535,udp:1-65535,icmp                                      bootstrap-e2e-minion
+bootstrap-e2e-default-ssh                     bootstrap-e2e  0.0.0.0/0      tcp:22
+bootstrap-e2e-master-etcd                     bootstrap-e2e                 tcp:2380,tcp:2381                           bootstrap-e2e-master  bootstrap-e2e-master
+bootstrap-e2e-master-https                    bootstrap-e2e  0.0.0.0/0      tcp:443                                                           bootstrap-e2e-master
+bootstrap-e2e-minion-all                      bootstrap-e2e  10.180.0.0/14  tcp,udp,icmp,esp,ah,sctp                                          bootstrap-e2e-minion
+bootstrap-e2e-minion-bootstrap-e2e-http-alt   bootstrap-e2e  0.0.0.0/0      tcp:80,tcp:8080                                                   bootstrap-e2e-minion
+bootstrap-e2e-minion-bootstrap-e2e-nodeports  bootstrap-e2e  0.0.0.0/0      tcp:30000-32767,udp:30000-32767                                   bootstrap-e2e-minion

Issues about this test specifically: #33373 #33416 #34060
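
For triage, a quick way to check whether the leaked bootstrap-e2e resources above are still live is to list them straight from GCE. A minimal sketch, assuming the gcloud CLI is installed and authenticated against the test project (the name prefix comes from the diff above):

```go
package main

import (
	"fmt"
	"os/exec"
)

// listLeaked shells out to gcloud and prints resources carrying the e2e
// cluster's name prefix, roughly what the DiffResources step diffs against.
func listLeaked(resource string) {
	out, err := exec.Command("gcloud", "compute", resource, "list",
		"--filter=name~^bootstrap-e2e").CombinedOutput()
	if err != nil {
		fmt.Printf("listing %s: %v\n", resource, err)
		return
	}
	fmt.Printf("[ %s ]\n%s", resource, out)
}

func main() {
	// The resource kinds that appear in the leak report above.
	for _, r := range []string{"instances", "disks", "addresses", "routes", "firewall-rules"} {
		listLeaked(r)
	}
}
```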

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 50m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207
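
The three timeout failures above share one mechanism: the run has an overall budget (50m here), and once it is blown each remaining step gets a 15m grace period before being killed. The shape of that logic, sketched with stdlib context deadlines (an illustration only, not e2e.go's actual code; the teardown command is an assumed stand-in):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runStep kills the command once grace elapses, producing the
// "Terminate testing after 15m" style failure seen in the logs above.
func runStep(name string, grace time.Duration, cmd string, args ...string) error {
	ctx, cancel := context.WithTimeout(context.Background(), grace)
	defer cancel()
	err := exec.CommandContext(ctx, cmd, args...).Run()
	if ctx.Err() == context.DeadlineExceeded {
		return fmt.Errorf("terminate %s after %v timeout", name, grace)
	}
	return err
}

func main() {
	// Hypothetical teardown entry point, hedged stand-in for the real step.
	if err := runStep("TearDown", 15*time.Minute, "./cluster/kube-down.sh"); err != nil {
		fmt.Println(err)
	}
}
```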

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/1087/

Multiple broken tests:

Failed: TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 30 leaked resources
+NAME                           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+bootstrap-e2e-minion-template  n1-standard-2               2016-12-08T06:57:55.247-08:00
+NAME                        LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+bootstrap-e2e-minion-group  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                             ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+bootstrap-e2e-master             us-central1-f  n1-standard-1               10.240.0.2   104.197.124.197  RUNNING
+bootstrap-e2e-minion-group-4h8x  us-central1-f  n1-standard-2               10.240.0.5   104.197.109.16   RUNNING
+bootstrap-e2e-minion-group-6bg7  us-central1-f  n1-standard-2               10.240.0.4   104.154.141.64   RUNNING
+bootstrap-e2e-minion-group-phso  us-central1-f  n1-standard-2               10.240.0.3   35.184.21.145    RUNNING
+bootstrap-e2e-master                                            us-central1-f  20       pd-standard  READY
+bootstrap-e2e-master-pd                                         us-central1-f  20       pd-ssd       READY
+bootstrap-e2e-minion-group-4h8x                                 us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-6bg7                                 us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-phso                                 us-central1-f  100      pd-standard  READY
+NAME                     REGION       ADDRESS          STATUS
+bootstrap-e2e-master-ip  us-central1  104.197.124.197  IN_USE
+bootstrap-e2e-f9fd1deb-bd56-11e6-8647-42010af00002  bootstrap-e2e  10.180.0.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-phso  1000
+bootstrap-e2e-fa08ad60-bd56-11e6-8647-42010af00002  bootstrap-e2e  10.180.1.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-4h8x  1000
+bootstrap-e2e-fb0669fd-bd56-11e6-8647-42010af00002  bootstrap-e2e  10.180.2.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-6bg7  1000
+bootstrap-e2e-fc8a8f71-bd56-11e6-8647-42010af00002  bootstrap-e2e  10.180.3.0/24  us-central1-f/instances/bootstrap-e2e-master             1000
+default-route-67d976ee036cb265                      bootstrap-e2e  10.240.0.0/16                                                           1000
+default-route-d8677a8b1ded1df9                      bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                 1000
+bootstrap-e2e-default-internal-master         bootstrap-e2e  10.0.0.0/8     tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp                        bootstrap-e2e-master
+bootstrap-e2e-default-internal-node           bootstrap-e2e  10.0.0.0/8     tcp:1-65535,udp:1-65535,icmp                                      bootstrap-e2e-minion
+bootstrap-e2e-default-ssh                     bootstrap-e2e  0.0.0.0/0      tcp:22
+bootstrap-e2e-master-etcd                     bootstrap-e2e                 tcp:2380,tcp:2381                           bootstrap-e2e-master  bootstrap-e2e-master
+bootstrap-e2e-master-https                    bootstrap-e2e  0.0.0.0/0      tcp:443                                                           bootstrap-e2e-master
+bootstrap-e2e-minion-all                      bootstrap-e2e  10.180.0.0/14  tcp,udp,icmp,esp,ah,sctp                                          bootstrap-e2e-minion
+bootstrap-e2e-minion-bootstrap-e2e-http-alt   bootstrap-e2e  0.0.0.0/0      tcp:80,tcp:8080                                                   bootstrap-e2e-minion
+bootstrap-e2e-minion-bootstrap-e2e-nodeports  bootstrap-e2e  0.0.0.0/0      tcp:30000-32767,udp:30000-32767                                   bootstrap-e2e-minion

Issues about this test specifically: #33373 #33416 #34060

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 50m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/1284/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/1292/

Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Failed to delete server pod: dial tcp 35.184.34.17:443: getsockopt: connection refused
Expected error:
    <*net.OpError | 0xc420f30730>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 34, 17],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.34.17:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:187

Issues about this test specifically: #37056
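
The `Err: 0x6f` buried in these dumps is just a Linux errno: 0x6f is 111, ECONNREFUSED, which is why every message renders as "connection refused". The apiserver at 35.184.34.17:443 stopped accepting connections mid-suite, so every test that touched it afterwards failed the same way. The decoding can be confirmed in two lines (Linux-specific):

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// 0x6f == 111 == ECONNREFUSED on Linux, matching the dumps above.
	fmt.Println(uint8(0x6f), syscall.Errno(0x6f)) // prints: 111 connection refused
}
```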

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ disks ]
+bootstrap-e2e-dynamic-pvc-379aa740-c0e5-11e6-8f40-42010af00002  us-central1-f  1        pd-standard  READY
+bootstrap-e2e-dynamic-pvc-38a13d11-c0e5-11e6-8f40-42010af00002  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:463
Expected error:
    <*url.Error | 0xc420f67b90>: {
        Op: "Get",
        URL: "https://35.184.34.17/api/v1/namespaces/e2e-tests-resourcequota-mcznk/resourcequotas/quota-not-terminating",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 34, 17],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.34.17/api/v1/namespaces/e2e-tests-resourcequota-mcznk/resourcequotas/quota-not-terminating: dial tcp 35.184.34.17:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:401

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:81
Expected error:
    <*url.Error | 0xc420f70d80>: {
        Op: "Get",
        URL: "https://35.184.34.17/apis/batch/v1/namespaces/e2e-tests-job-h2qv1/jobs/fail-once-local",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 34, 17],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.34.17/apis/batch/v1/namespaces/e2e-tests-job-h2qv1/jobs/fail-once-local: dial tcp 35.184.34.17:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:80

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Failed to clear the second deployment's overlapping annotation
Expected error:
    <*url.Error | 0xc4210ea900>: {
        Op: "Get",
        URL: "https://35.184.34.17/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-v9vfw/deployments/second-deployment",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 34, 17],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.34.17/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-v9vfw/deployments/second-deployment: dial tcp 35.184.34.17:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1251

Issues about this test specifically: #31502 #32947 #38646

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Expected error:
    <*errors.errorString | 0xc420462800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #31498 #33896 #35507
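
"timed out waiting for the condition" is the stock error the e2e framework returns when a polling loop exhausts its deadline before its condition (here, the job's pods reaching Running) holds. The real tests use k8s.io/apimachinery's wait package; the loop's shape, sketched with the stdlib only and a hypothetical condition:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil re-checks condition every interval until it reports done or the
// timeout elapses, yielding the exact error string seen in the failure above.
func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := condition()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errWaitTimeout
}

func main() {
	// Hypothetical stand-in for "all job pods are Running".
	podsRunning := func() (bool, error) { return false, nil }
	fmt.Println(pollUntil(2*time.Second, 10*time.Second, podsRunning))
}
```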

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:356
Failed after 1.014s.
Expected success, but got an error:
    <*url.Error | 0xc421067260>: {
        Op: "Get",
        URL: "https://35.184.34.17/api/v1/nodes/bootstrap-e2e-minion-group-eji8",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 34, 17],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.34.17/api/v1/nodes/bootstrap-e2e-minion-group-eji8: dial tcp 35.184.34.17:443: getsockopt: connection refused
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:354

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy stateful pod restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:177
Expected error:
    <*url.Error | 0xc420eecb40>: {
        Op: "Get",
        URL: "https://35.184.34.17/api/v1/namespaces/e2e-tests-statefulset-fthh5/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 34, 17],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.34.17/api/v1/namespaces/e2e-tests-statefulset-fthh5/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar: dial tcp 35.184.34.17:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:884

Issues about this test specifically: #38573

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:495
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.34.17 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-dqz5j run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never success -- /bin/sh -c exit 0] []  <nil>  Unable to connect to the server: unexpected EOF\n [] <nil> 0xc420d78d50 exit status 1 <nil> <nil> true [0xc420518638 0xc420518658 0xc420518678] [0xc420518638 0xc420518658 0xc420518678] [0xc420518650 0xc420518670] [0xc36ed0 0xc36ed0] 0xc420d4bce0 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: unexpected EOF\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.34.17 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-dqz5j run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never success -- /bin/sh -c exit 0] []  <nil>  Unable to connect to the server: unexpected EOF
     [] <nil> 0xc420d78d50 exit status 1 <nil> <nil> true [0xc420518638 0xc420518658 0xc420518678] [0xc420518638 0xc420518658 0xc420518678] [0xc420518650 0xc420518670] [0xc36ed0 0xc36ed0] 0xc420d4bce0 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: unexpected EOF
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:469

Issues about this test specifically: #31151 #35586
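
This test shells out to kubectl and asserts on the child process's exit status; here it never got that far because the connection died ("Unable to connect to the server: unexpected EOF"). For reference, retrieving an exit code from a subprocess in Go looks like this (a generic stdlib sketch, not the test's own helper):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "/bin/sh -c 'exit 3'" stands in for the kubectl invocation in the log.
	err := exec.Command("/bin/sh", "-c", "exit 3").Run()
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit status:", ee.ExitCode()) // prints: exit status: 3
	}
}
```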

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:348
Expected error:
    <*url.Error | 0xc4209dd2f0>: {
        Op: "Get",
        URL: "https://35.184.34.17/api/v1/namespaces/e2e-tests-statefulset-kjg6k/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 34, 17],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.34.17/api/v1/namespaces/e2e-tests-statefulset-kjg6k/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar: dial tcp 35.184.34.17:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:884

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/2235/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:142
Jan  2 17:04:43.982: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:893

Issues about this test specifically: #37361 #37919

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 32 leaked resources
[ instance-templates ]
+NAME                           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+bootstrap-e2e-minion-template  n1-standard-2               2017-01-02T16:40:46.355-08:00
[ instance-groups ]
+NAME                        LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+bootstrap-e2e-minion-group  us-central1-f  zone   bootstrap-e2e  Yes      0
[ instances ]
+NAME                             ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+bootstrap-e2e-master             us-central1-f  n1-standard-1               10.240.0.2   104.198.37.68   RUNNING
+bootstrap-e2e-minion-group-gtvv  us-central1-f  n1-standard-2               10.240.0.5   35.184.63.178   STOPPING
+bootstrap-e2e-minion-group-tj05  us-central1-f  n1-standard-2               10.240.0.4   35.184.31.149   STOPPING
+bootstrap-e2e-minion-group-vcnk  us-central1-f  n1-standard-2               10.240.0.3   104.154.128.63  STOPPING
+bootstrap-e2e-minion-group-xlpt  us-central1-f  n1-standard-2               10.240.0.6   35.184.31.74    STOPPING
[ disks ]
+bootstrap-e2e-dynamic-pvc-134f0973-d14f-11e6-9654-42010af00002  us-central1-f  1        pd-standard  READY
[ disks ]
+bootstrap-e2e-master                                            us-central1-f  20       pd-standard  READY
+bootstrap-e2e-master-pd                                         us-central1-f  20       pd-ssd       READY
+bootstrap-e2e-minion-group-gtvv                                 us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-tj05                                 us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-vcnk                                 us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-xlpt                                 us-central1-f  100      pd-standard  READY
[ addresses ]
+NAME                     REGION       ADDRESS        STATUS
+bootstrap-e2e-master-ip  us-central1  104.198.37.68  IN_USE
[ routes ]
+bootstrap-e2e-8290520a-d14d-11e6-9654-42010af00002  bootstrap-e2e  10.180.0.0/24  us-central1-f/instances/bootstrap-e2e-master             1000
+bootstrap-e2e-8b958917-d14d-11e6-9654-42010af00002  bootstrap-e2e  10.180.1.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-xlpt  1000
+bootstrap-e2e-8ba6d223-d14d-11e6-9654-42010af00002  bootstrap-e2e  10.180.2.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-gtvv  1000
+bootstrap-e2e-8c1dd8fa-d14d-11e6-9654-42010af00002  bootstrap-e2e  10.180.3.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-vcnk  1000
+bootstrap-e2e-8c9eeaa9-d14d-11e6-9654-42010af00002  bootstrap-e2e  10.180.4.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-tj05  1000
[ routes ]
+default-route-1f3e70ebc5af4cfd                      bootstrap-e2e  10.240.0.0/16                                                           1000
[ routes ]
+default-route-68badf9f3ec96347                      bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                 1000
[ firewall-rules ]
+bootstrap-e2e-default-internal-master  bootstrap-e2e  10.0.0.0/8     tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp                        bootstrap-e2e-master
+bootstrap-e2e-default-internal-node    bootstrap-e2e  10.0.0.0/8     tcp:1-65535,udp:1-65535,icmp                                      bootstrap-e2e-minion
+bootstrap-e2e-default-ssh              bootstrap-e2e  0.0.0.0/0      tcp:22
+bootstrap-e2e-master-etcd              bootstrap-e2e                 tcp:2380,tcp:2381                           bootstrap-e2e-master  bootstrap-e2e-master
+bootstrap-e2e-master-https             bootstrap-e2e  0.0.0.0/0      tcp:443                                                           bootstrap-e2e-master
+bootstrap-e2e-minion-all               bootstrap-e2e  10.180.0.0/14  tcp,udp,icmp,esp,ah,sctp                                          bootstrap-e2e-minion

Issues about this test specifically: #33373 #33416 #34060

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/2497/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 11 leaked resources
[ disks ]
+bootstrap-e2e-dynamic-pvc-5ea8454d-d5e0-11e6-8116-42010af00002  us-central1-f  1        pd-standard  READY
[ routes ]
+bootstrap-e2e-f4c30f3b-d5de-11e6-8116-42010af00002  bootstrap-e2e  10.180.0.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-kznl  1000
+bootstrap-e2e-f5a16b1d-d5de-11e6-8116-42010af00002  bootstrap-e2e  10.180.1.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-6b7x  1000
+bootstrap-e2e-f68a5342-d5de-11e6-8116-42010af00002  bootstrap-e2e  10.180.2.0/24  us-central1-f/instances/bootstrap-e2e-master             1000
+bootstrap-e2e-f752ed76-d5de-11e6-8116-42010af00002  bootstrap-e2e  10.180.3.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-9d00  1000
+bootstrap-e2e-f7834f89-d5de-11e6-8116-42010af00002  bootstrap-e2e  10.180.4.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-hkh8  1000
[ routes ]
+default-route-67562df26ddc036b                      bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                 1000
[ routes ]
+default-route-bd15e436884699e7                      bootstrap-e2e  10.240.0.0/16                                                           1000
[ firewall-rules ]
+bootstrap-e2e-default-internal-master  bootstrap-e2e  10.0.0.0/8    tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp            bootstrap-e2e-master
+bootstrap-e2e-default-internal-node    bootstrap-e2e  10.0.0.0/8    tcp:1-65535,udp:1-65535,icmp                          bootstrap-e2e-minion
+bootstrap-e2e-default-ssh              bootstrap-e2e  0.0.0.0/0     tcp:22

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy stateful pod restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:177
Jan  8 12:34:21.778: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:893

Issues about this test specifically: #38573

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc4208745d0>: {
        s: "deployment \"nginx\" never updated with the desired condition and reason: [{Available False 2017-01-08 12:18:33 -0800 PST 2017-01-08 12:18:33 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-01-08 12:18:34 -0800 PST 2017-01-08 12:18:33 -0800 PST ReplicaSetUpdated ReplicaSet \"nginx-3039220502\" is progressing.}]",
    }
    deployment "nginx" never updated with the desired condition and reason: [{Available False 2017-01-08 12:18:33 -0800 PST 2017-01-08 12:18:33 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-01-08 12:18:34 -0800 PST 2017-01-08 12:18:33 -0800 PST ReplicaSetUpdated ReplicaSet "nginx-3039220502" is progressing.}]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1260

Issues about this test specifically: #31697 #36574

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/2737/
Multiple broken tests:

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:537
Expected error:
    <*errors.errorString | 0xc4203eaa40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:506

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:212
Jan 13 22:26:58.165: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:893

Issues about this test specifically: #38439

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ disks ]
+bootstrap-e2e-dynamic-pvc-0dfa5fad-da21-11e6-9bb7-42010af00002  us-central1-f  1        pd-standard  READY
+bootstrap-e2e-dynamic-pvc-0e424de2-da21-11e6-9bb7-42010af00002  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/2986/
Multiple broken tests:

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:154
Expected error:
    <*errors.errorString | 0xc420db8910>: {
        s: "default service account has too many secret references: []v1.ObjectReference{v1.ObjectReference{Kind:\"\", Namespace:\"\", Name:\"default-token-3df8p\", UID:\"\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}, v1.ObjectReference{Kind:\"\", Namespace:\"\", Name:\"default-token-vdg31\", UID:\"\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}}",
    }
    default service account has too many secret references: []v1.ObjectReference{v1.ObjectReference{Kind:"", Namespace:"", Name:"default-token-3df8p", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}, v1.ObjectReference{Kind:"", Namespace:"", Name:"default-token-vdg31", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:104

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:212
Jan 19 17:44:32.333: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:893

Issues about this test specifically: #38439

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ disks ]
+bootstrap-e2e-dynamic-pvc-97e8b2ac-deb0-11e6-a4c3-42010af00002  us-central1-f  1        pd-standard  READY
+bootstrap-e2e-dynamic-pvc-981d151a-deb0-11e6-a4c3-42010af00002  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/3223/
Multiple broken tests:

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:474
Expected error:
    <*errors.StatusError | 0xc420aa8800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "client: etcd member http://127.0.0.1:2379 has no leader",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    client: etcd member http://127.0.0.1:2379 has no leader
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:387

Issues about this test specifically: #31158 #34303
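
"etcd member http://127.0.0.1:2379 has no leader" is an etcd2 client error surfaced through the apiserver as a 500: while the member is leaderless, every write fails, so otherwise unrelated tests collapse together. A quick probe against etcd's standard /health endpoint (assuming shell access to the master) would confirm it:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// etcd returns {"health": "true"} here once the member has a leader.
	resp, err := http.Get("http://127.0.0.1:2379/health")
	if err != nil {
		fmt.Println("etcd unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```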

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Jan 25 05:28:36.697: Failed to create endpoints for Gluster server: client: etcd member http://127.0.0.1:2379 has no leader
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:454

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:81
Jan 25 05:26:11.905: Failed to delete pod "pod-secrets-d1638802-e301-11e6-92f5-0242ac110005": client: etcd cluster is unavailable or misconfigured
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:119

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1151
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.65.152 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-vgq6p] []  <nil> Created e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5\nScaling up e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\n Error from server: client: etcd cluster is unavailable or misconfigured\n [] <nil> 0xc420cb6960 exit status 1 <nil> <nil> true [0xc420cca0a0 0xc420cca0b8 0xc420cca0d0] [0xc420cca0a0 0xc420cca0b8 0xc420cca0d0] [0xc420cca0b0 0xc420cca0c8] [0xb9e4d0 0xb9e4d0] 0xc421383740 <nil>}:\nCommand stdout:\nCreated e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5\nScaling up e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\n\nstderr:\nError from server: client: etcd cluster is unavailable or misconfigured\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.65.152 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-vgq6p] []  <nil> Created e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5
    Scaling up e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5 up to 1
    Scaling e2e-test-nginx-rc down to 0
    Update succeeded. Deleting old controller: e2e-test-nginx-rc
     Error from server: client: etcd cluster is unavailable or misconfigured
     [] <nil> 0xc420cb6960 exit status 1 <nil> <nil> true [0xc420cca0a0 0xc420cca0b8 0xc420cca0d0] [0xc420cca0a0 0xc420cca0b8 0xc420cca0d0] [0xc420cca0b0 0xc420cca0c8] [0xb9e4d0 0xb9e4d0] 0xc421383740 <nil>}:
    Command stdout:
    Created e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5
    Scaling up e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-9c5e282f4ad19cd6980f3ea2520eaaf5 up to 1
    Scaling e2e-test-nginx-rc down to 0
    Update succeeded. Deleting old controller: e2e-test-nginx-rc
    
    stderr:
    Error from server: client: etcd cluster is unavailable or misconfigured
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:174

Issues about this test specifically: #26138 #28429 #28737 #38064

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/3565/
Multiple broken tests:

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*strconv.NumError | 0xc421163470>: {
        Func: "ParseInt",
        Num: "",
        Err: {s: "invalid syntax"},
    }
    strconv.ParseInt: parsing "": invalid syntax
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:193

Issues about this test specifically: #36288 #36913
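
That strconv.NumError means the test tried to parse an empty string as an integer, suggesting the TCP timeout value it read back over ssh came back empty, leaving nothing to parse. The exact error reproduces trivially:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Parsing empty command output yields the same error as the failure above.
	_, err := strconv.ParseInt("", 10, 64)
	fmt.Println(err) // prints: strconv.ParseInt: parsing "": invalid syntax
}
```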

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:100
Expected error:
    <*errors.errorString | 0xc42068af90>: {
        s: "deployment \"nginx\" never updated with the desired condition and reason: [{Available False 2017-02-02 14:48:21.608645019 -0800 PST 2017-02-02 14:48:21.60864533 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-02 14:48:23.141333847 -0800 PST 2017-02-02 14:48:21.587895438 -0800 PST ReplicaSetUpdated ReplicaSet \"nginx-1638191467\" is progressing.}]",
    }
    deployment "nginx" never updated with the desired condition and reason: [{Available False 2017-02-02 14:48:21.608645019 -0800 PST 2017-02-02 14:48:21.60864533 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-02 14:48:23.141333847 -0800 PST 2017-02-02 14:48:21.587895438 -0800 PST ReplicaSetUpdated ReplicaSet "nginx-1638191467" is progressing.}]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1232

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:67
Expected error:
    <*errors.errorString | 0xc420cb0b10>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:295

Issues about this test specifically: #31075 #36286 #38041

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/3692/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:100
Expected error:
    <*errors.errorString | 0xc42100beb0>: {
        s: "deployment \"nginx\" never updated with the desired condition and reason: [{Available False 2017-02-05 20:34:23.810513658 -0800 PST 2017-02-05 20:34:23.810513908 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-05 20:34:23.8625177 -0800 PST 2017-02-05 20:34:23.790916946 -0800 PST ReplicaSetUpdated ReplicaSet \"nginx-1638191467\" is progressing.}]",
    }
    deployment "nginx" never updated with the desired condition and reason: [{Available False 2017-02-05 20:34:23.810513658 -0800 PST 2017-02-05 20:34:23.810513908 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-05 20:34:23.8625177 -0800 PST 2017-02-05 20:34:23.790916946 -0800 PST ReplicaSetUpdated ReplicaSet "nginx-1638191467" is progressing.}]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1232

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:88
Expected error:
    <*errors.errorString | 0xc4206eb630>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:861

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:274
Test Panicked
/usr/local/go/src/runtime/panic.go:458
