
ci-kubernetes-e2e-gci-gce-proto: broken test run #42100

Closed
k8s-github-robot opened this issue Feb 25, 2017 · 16 comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/4825/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 32 leaked resources
[ instance-templates ]
+NAME                           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+bootstrap-e2e-minion-template  n1-standard-2               2017-02-25T04:47:18.244-08:00
[ instance-groups ]
+NAME                        LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+bootstrap-e2e-minion-group  us-central1-f  zone   bootstrap-e2e  Yes      3
[ instances ]
+NAME                             ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+bootstrap-e2e-master             us-central1-f  n1-standard-1               10.240.0.2   35.184.69.40    RUNNING
+bootstrap-e2e-minion-group-21kb  us-central1-f  n1-standard-2               10.240.0.4   35.184.170.84   RUNNING
+bootstrap-e2e-minion-group-pwwf  us-central1-f  n1-standard-2               10.240.0.3   35.184.153.219  RUNNING
+bootstrap-e2e-minion-group-th8m  us-central1-f  n1-standard-2               10.240.0.5   35.184.173.8    RUNNING
[ disks ]
+NAME                             ZONE           SIZE_GB  TYPE         STATUS
+bootstrap-e2e-master             us-central1-f  20       pd-standard  READY
+bootstrap-e2e-master-pd          us-central1-f  20       pd-ssd       READY
+bootstrap-e2e-minion-group-21kb  us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-pwwf  us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-th8m  us-central1-f  100      pd-standard  READY
[ addresses ]
+NAME                     REGION       ADDRESS       STATUS
+bootstrap-e2e-master-ip  us-central1  35.184.69.40  IN_USE
[ routes ]
+bootstrap-e2e-cfe1285e-fb58-11e6-8c6a-42010af00002  bootstrap-e2e  10.180.0.0/24  us-central1-f/instances/bootstrap-e2e-master             1000
+bootstrap-e2e-d0683f1e-fb58-11e6-8c6a-42010af00002  bootstrap-e2e  10.180.1.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-th8m  1000
+bootstrap-e2e-d0f45d26-fb58-11e6-8c6a-42010af00002  bootstrap-e2e  10.180.2.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-pwwf  1000
+bootstrap-e2e-d1011d8f-fb58-11e6-8c6a-42010af00002  bootstrap-e2e  10.180.3.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-21kb  1000
[ routes ]
+default-route-5f12ac1f02b6e9fa                      bootstrap-e2e  10.240.0.0/16                                                           1000
[ routes ]
+default-route-86935f4de8bdcd34                      bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                 1000
[ firewall-rules ]
+NAME                                          NETWORK        SRC_RANGES     RULES                                       SRC_TAGS              TARGET_TAGS
+bootstrap-e2e-default-internal-master         bootstrap-e2e  10.0.0.0/8     tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp                        bootstrap-e2e-master
+bootstrap-e2e-default-internal-node           bootstrap-e2e  10.0.0.0/8     tcp:1-65535,udp:1-65535,icmp                                      bootstrap-e2e-minion
+bootstrap-e2e-default-ssh                     bootstrap-e2e  0.0.0.0/0      tcp:22
+bootstrap-e2e-master-etcd                     bootstrap-e2e                 tcp:2380,tcp:2381                           bootstrap-e2e-master  bootstrap-e2e-master
+bootstrap-e2e-master-https                    bootstrap-e2e  0.0.0.0/0      tcp:443                                                           bootstrap-e2e-master
+bootstrap-e2e-minion-all                      bootstrap-e2e  10.180.0.0/14  tcp,udp,icmp,esp,ah,sctp                                          bootstrap-e2e-minion
+bootstrap-e2e-minion-bootstrap-e2e-http-alt   bootstrap-e2e  0.0.0.0/0      tcp:80,tcp:8080                                                   bootstrap-e2e-minion
+bootstrap-e2e-minion-bootstrap-e2e-nodeports  bootstrap-e2e  0.0.0.0/0      tcp:30000-32767,udp:30000-32767                                   bootstrap-e2e-minion

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454
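
For reference, leaked resources like the ones listed above are normally reclaimed by the CI janitor; when they are not, they can be removed by hand with gcloud. A minimal sketch, assuming gcloud is already pointed at the test project and using the names and zone from the diff above (the exact set of leaked objects varies per run):

```sh
# Clean up the leaked bootstrap-e2e resources listed in the diff above.
# Assumes gcloud is authenticated against the CI test project.
ZONE=us-central1-f
REGION=us-central1

gcloud compute instance-groups managed delete bootstrap-e2e-minion-group --zone "$ZONE" --quiet
gcloud compute instance-templates delete bootstrap-e2e-minion-template --quiet
gcloud compute instances delete bootstrap-e2e-master --zone "$ZONE" --quiet
gcloud compute disks delete bootstrap-e2e-master-pd --zone "$ZONE" --quiet
gcloud compute addresses delete bootstrap-e2e-master-ip --region "$REGION" --quiet

# Remove the remaining bootstrap-e2e firewall rules and routes by name prefix.
gcloud compute firewall-rules list --filter="name~^bootstrap-e2e" --format="value(name)" \
  | xargs -r gcloud compute firewall-rules delete --quiet
gcloud compute routes list --filter="name~^bootstrap-e2e" --format="value(name)" \
  | xargs -r gcloud compute routes delete --quiet
```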

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:334
Expected error:
    <*errors.errorString | 0xc4203763d0>: {
        s: "watch closed before Until timeout",
    }
    watch closed before Until timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:333

Previous issues for this suite: #36946 #37034 #40447

@k8s-github-robot added the kind/flake and priority/P2 labels on Feb 25, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/4867/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:561
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-p323k run run-test --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc420ad3140  error: timed out waiting for the condition\n [] <nil> 0xc42109b5c0 exit status 1 <nil> <nil> true [0xc420094610 0xc420094638 0xc420094648] [0xc420094610 0xc420094638 0xc420094648] [0xc420094618 0xc420094630 0xc420094640] [0xc8abc0 0xc8acc0 0xc8acc0] 0xc421150240 <nil>}:\nCommand stdout:\n\nstderr:\nerror: timed out waiting for the condition\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-p323k run run-test --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc420ad3140  error: timed out waiting for the condition
     [] <nil> 0xc42109b5c0 exit status 1 <nil> <nil> true [0xc420094610 0xc420094638 0xc420094648] [0xc420094610 0xc420094638 0xc420094648] [0xc420094618 0xc420094630 0xc420094640] [0xc8abc0 0xc8acc0 0xc8acc0] 0xc421150240 <nil>}:
    Command stdout:
    
    stderr:
    error: timed out waiting for the condition
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2098

Issues about this test specifically: #26324 #27715 #28845
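
If it helps to reproduce this outside CI, the command the test ran is embedded in the error above; a rough equivalent, assuming a reachable cluster (the server, kubeconfig, and e2e namespace are per-run values and would need to be substituted):

```sh
# Approximation of the failing e2e step: run a pod with stdin attached and
# pipe data through `cat`; the test expects the attach to finish before the timeout.
kubectl --namespace=e2e-tests-kubectl-p323k run run-test \
  --image=gcr.io/google_containers/busybox:1.24 \
  --restart=OnFailure --attach=true --stdin -- sh -c "cat && echo 'stdin closed'"
```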

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Expected error:
    <*errors.errorString | 0xc42043f510>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:153

Issues about this test specifically: #32644

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:97
Expected error:
    <*errors.errorString | 0xc4203fcd10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:96

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc42118c350>: {
        s: "Pod name my-hostname-basic-9cc01b28-fc0c-11e6-9759-0242ac110002: Gave up waiting 2m0s for 2 pods to come up",
    }
    Pod name my-hostname-basic-9cc01b28-fc0c-11e6-9759-0242ac110002: Gave up waiting 2m0s for 2 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:143

Issues about this test specifically: #30981

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:91
Expected error:
    <*errors.errorString | 0xc420443200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:920

Issues about this test specifically: #29828

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:82
Expected error:
    <*errors.errorString | 0xc42123f5d0>: {
        s: "deployment \"test-rollback-deployment\" failed to create new replica set",
    }
    deployment "test-rollback-deployment" failed to create new replica set
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:627

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/garbage_collector.go:442
Feb 26 02:46:31.110: remaining rs post mortem: &v1beta1.ReplicaSetList{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"/apis/extensions/v1beta1/namespaces/e2e-tests-gc-mpp3k/replicasets", ResourceVersion:"9334"}, Items:[]v1beta1.ReplicaSet(nil)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/garbage_collector.go:422

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:282
Expected error:
    <*errors.errorString | 0xc4204132b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:281

Issues about this test specifically: #34372

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:363
Expected error:
    <*errors.errorString | 0xc4203ee0c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:97
The first deployment failed to update to revision 1
Expected error:
    <*errors.errorString | 0xc420ef3410>: {
        s: "deployment \"first-deployment\" failed to create new replica set",
    }
    deployment "first-deployment" failed to create new replica set
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1121

Issues about this test specifically: #31502 #32947 #38646

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:60
Expected error:
    <*errors.errorString | 0xc4203d3cd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:59

Issues about this test specifically: #31938

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:92
Expected error:
    <*errors.errorString | 0xc420aeced0>: {
        s: "rs controller never added the failure condition for replica set \"condition-test\": []v1beta1.ReplicaSetCondition(nil)",
    }
    rs controller never added the failure condition for replica set "condition-test": []v1beta1.ReplicaSetCondition(nil)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:221

Issues about this test specifically: #36554

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:79
Expected error:
    <*errors.errorString | 0xc42044f4d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:78

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc4212bb1d0>: {
        s: "deployment \"test-recreate-deployment\" failed to create new replica set",
    }
    deployment "test-recreate-deployment" failed to create new replica set
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:342

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:61
Expected error:
    <*errors.errorString | 0xc421035660>: {
        s: "deployment \"test-new-deployment\" failed to create new replica set",
    }
    deployment "test-new-deployment" failed to create new replica set
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:224

Issues about this test specifically: #35579

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:98
Expected error:
    <*errors.StatusError | 0xc42175e380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "configmaps \"kube-dns-autoscaler\" not found",
            Reason: "NotFound",
            Details: {
                Name: "kube-dns-autoscaler",
                Group: "",
                Kind: "configmaps",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    configmaps "kube-dns-autoscaler" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:67

Issues about this test specifically: #36569 #38446
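
The error above is a plain NotFound on the autoscaler's ConfigMap, so a first check on an affected cluster is whether the add-on ever came up. A minimal sketch, assuming the usual kube-system placement of the kube-dns-autoscaler add-on:

```sh
# The test reads this ConfigMap; a NotFound here matches the failure above.
kubectl --namespace=kube-system get configmap kube-dns-autoscaler
# The ConfigMap is written by the autoscaler pod itself, so also check its Deployment and pods.
kubectl --namespace=kube-system get deployment kube-dns-autoscaler
kubectl --namespace=kube-system get pods -l k8s-app=kube-dns-autoscaler
```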

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Expected error:
    <*errors.errorString | 0xc420413d20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:185

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:372
Expected error:
    <*errors.errorString | 0xc422034040>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1641

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:79
Expected error:
    <*errors.errorString | 0xc420415620>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:543

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:120
Expected error:
    <*errors.errorString | 0xc4203fd440>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:107

Issues about this test specifically: #28003

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:94
Expected error:
    <*errors.errorString | 0xc420413d20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:970

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] Garbage collector should orphan pods created by rc if delete options say so {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/garbage_collector.go:283
Feb 26 02:35:15.396: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/garbage_collector.go:267

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:85
Expected error:
    <*errors.errorString | 0xc420a30570>: {
        s: "deployment \"test-rollback-no-revision-deployment\" failed to create new replica set",
    }
    deployment "test-rollback-no-revision-deployment" failed to create new replica set
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:745

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:448
Expected error:
    <*errors.errorString | 0xc42044f9e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230

Issues about this test specifically: #28337

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:257
Feb 26 02:26:12.435: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc421aec320>: {
        s: "Pod name my-hostname-private-48eefb26-fc0e-11e6-8cc4-0242ac110002: Gave up waiting 2m0s for 2 pods to come up",
    }
    Pod name my-hostname-private-48eefb26-fc0e-11e6-8cc4-0242ac110002: Gave up waiting 2m0s for 2 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:143

Issues about this test specifically: #32023

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:334
Feb 26 02:28:04.954: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:418
Feb 26 02:32:28.540: Pod ss-0 expected to be re-created at least once
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:397

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Feb 26 02:33:31.944: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Feb 26 02:23:22.262: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:308

Issues about this test specifically: #29647 #35627 #38293
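
Not from the log itself, but a quick way to see whether the monitoring add-ons this test queries were ever scheduled; the label selectors and service names below are the usual influxdb-addon ones and may differ between releases:

```sh
# Check the heapster/influxdb monitoring stack the test depends on.
kubectl --namespace=kube-system get pods -l k8s-app=heapster
kubectl --namespace=kube-system get pods -l k8s-app=influxGrafana
kubectl --namespace=kube-system get services heapster monitoring-influxdb monitoring-grafana
```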

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:163
Feb 26 02:27:06.002: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:76
Expected error:
    <*errors.errorString | 0xc420fb25e0>: {
        s: "Pod name rollover-pod: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name rollover-pod: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:453

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:98
Expected error:
    <*errors.errorString | 0xc421366030>: {
        s: "Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:53

Issues about this test specifically: #26191
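
The label selector in the error above can be checked directly on the cluster; a minimal sketch, assuming the dashboard add-on runs in kube-system as usual:

```sh
# The test waits for pods matching this selector to be running.
kubectl --namespace=kube-system get pods -l k8s-app=kubernetes-dashboard
kubectl --namespace=kube-system describe pods -l k8s-app=kubernetes-dashboard
```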

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:64
Expected error:
    <*errors.errorString | 0xc420e8fe70>: {
        s: "deployment \"test-new-deployment\" failed to create new replica set",
    }
    deployment "test-new-deployment" failed to create new replica set
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:256

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:88
Expected error:
    <*errors.errorString | 0xc421418530>: {
        s: "Pod name nginx: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name nginx: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:863

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:128
Feb 26 02:28:40.032: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1151
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-l1wgx] []  <nil> Created e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee\nScaling up e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee to e2e-test-nginx-rc\n error: timed out waiting for the condition\n [] <nil> 0xc42079b9b0 exit status 1 <nil> <nil> true [0xc420472988 0xc4204729a0 0xc4204729b8] [0xc420472988 0xc4204729a0 0xc4204729b8] [0xc420472998 0xc4204729b0] [0xc8acc0 0xc8acc0] 0xc420767560 <nil>}:\nCommand stdout:\nCreated e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee\nScaling up e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee to e2e-test-nginx-rc\n\nstderr:\nerror: timed out waiting for the condition\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-l1wgx] []  <nil> Created e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee
    Scaling up e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee up to 1
    Scaling e2e-test-nginx-rc down to 0
    Update succeeded. Deleting old controller: e2e-test-nginx-rc
    Renaming e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee to e2e-test-nginx-rc
     error: timed out waiting for the condition
     [] <nil> 0xc42079b9b0 exit status 1 <nil> <nil> true [0xc420472988 0xc4204729a0 0xc4204729b8] [0xc420472988 0xc4204729a0 0xc4204729b8] [0xc420472998 0xc4204729b0] [0xc8acc0 0xc8acc0] 0xc420767560 <nil>}:
    Command stdout:
    Created e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee
    Scaling up e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee up to 1
    Scaling e2e-test-nginx-rc down to 0
    Update succeeded. Deleting old controller: e2e-test-nginx-rc
    Renaming e2e-test-nginx-rc-e46d4fb4184d6b04e0ab4eef84f5e3ee to e2e-test-nginx-rc
    
    stderr:
    error: timed out waiting for the condition
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:174

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] CronJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cronjob.go:185
Expected error:
    <*errors.errorString | 0xc420443200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cronjob.go:174

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Expected error:
    <*errors.errorString | 0xc42044f9e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:189

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:75
Expected error:
    <*errors.errorString | 0xc420415f70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:73

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc421076cb0>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-02-26 02:17:18 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-02-26 02:17:50 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-02-26 02:17:18 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.180.2.46 StartTime:2017-02-26 02:17:18 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-02-26 02:17:49 -0800 PST,ContainerID:docker://b8a5f95aab9d9a1f3fb18bd5ce6fc71c94b54fe0fd7252370bee9a442a6c622a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://b8a5f95aab9d9a1f3fb18bd5ce6fc71c94b54fe0fd7252370bee9a442a6c622a}] QOSClass:BestEffort}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-02-26 02:17:18 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-02-26 02:17:50 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-02-26 02:17:18 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.180.2.46 StartTime:2017-02-26 02:17:18 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-02-26 02:17:49 -0800 PST,ContainerID:docker://b8a5f95aab9d9a1f3fb18bd5ce6fc71c94b54fe0fd7252370bee9a442a6c622a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://b8a5f95aab9d9a1f3fb18bd5ce6fc71c94b54fe0fd7252370bee9a442a6c622a}] QOSClass:BestEffort}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Garbage collector should delete pods created by rc when not orphaning {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/garbage_collector.go:221
Feb 26 02:17:32.606: failed to wait for all pods to be deleted: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/garbage_collector.go:212

Failed: [k8s.io] AppArmor should enforce an AppArmor profile {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apparmor.go:75
Expected error:
    <*errors.errorString | 0xc4210c22e0>: {
        s: "gave up waiting for pod 'test-apparmor' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'test-apparmor' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apparmor.go:73

Failed: [k8s.io] CronJob should remove from active list jobs that have been deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cronjob.go:227
Expected error:
    <*errors.errorString | 0xc4204132b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cronjob.go:196

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:411
Expected error:
    <*errors.errorString | 0xc4203e80e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:512
Expected error:
    <*errors.errorString | 0xc4203d3cd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1203
Feb 26 02:17:23.522: Failed getting pod controlled by deployment e2e-test-nginx-deployment: Timeout while waiting for pods with label run=e2e-test-nginx-deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1196

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1344
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-g7m6s run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc4212caa00  error: timed out waiting for the condition\n [] <nil> 0xc42204d8f0 exit status 1 <nil> <nil> true [0xc420af67e0 0xc420af6808 0xc420af6818] [0xc420af67e0 0xc420af6808 0xc420af6818] [0xc420af67e8 0xc420af6800 0xc420af6810] [0xc8abc0 0xc8acc0 0xc8acc0] 0xc421a037a0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: timed out waiting for the condition\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-g7m6s run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc4212caa00  error: timed out waiting for the condition
     [] <nil> 0xc42204d8f0 exit status 1 <nil> <nil> true [0xc420af67e0 0xc420af6808 0xc420af6818] [0xc420af67e0 0xc420af6808 0xc420af6818] [0xc420af67e8 0xc420af6800 0xc420af6810] [0xc8abc0 0xc8acc0 0xc8acc0] 0xc421a037a0 <nil>}:
    Command stdout:
    
    stderr:
    error: timed out waiting for the condition
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2098

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] NodeProblemDetector [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:74
Expected error:
    <*errors.errorString | 0xc4203e80e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:73

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:78
Expected error:
    <*errors.errorString | 0xc420413d20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:77

Issues about this test specifically: #38556

Failed: [k8s.io] CronJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cronjob.go:163
Expected error:
    <*errors.errorString | 0xc4203ef6b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cronjob.go:143

Failed: DiffResources {e2e.go}

Error: 17 leaked resources
[ instances ]
+NAME                  ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
+bootstrap-e2e-master  us-central1-f  n1-standard-1               10.240.0.2   35.184.69.40  STOPPING
[ disks ]
+NAME                     ZONE           SIZE_GB  TYPE         STATUS
+bootstrap-e2e-master     us-central1-f  20       pd-standard  READY
+bootstrap-e2e-master-pd  us-central1-f  20       pd-ssd       READY
[ addresses ]
+NAME                     REGION       ADDRESS       STATUS
+bootstrap-e2e-master-ip  us-central1  35.184.69.40  IN_USE
[ routes ]
+bootstrap-e2e-042666e9-fc0c-11e6-9e84-42010af00002  bootstrap-e2e  10.180.0.0/24  us-central1-f/instances/bootstrap-e2e-master  1000
[ routes ]
+default-route-c708c389c2c37f71                      bootstrap-e2e  10.240.0.0/16                                                1000
+default-route-e16c299cc0d9e042                      bootstrap-e2e  0.0.0.0/0      default-internet-gateway                      1000
[ firewall-rules ]
+NAME                                   NETWORK        SRC_RANGES     RULES                                       SRC_TAGS              TARGET_TAGS
+bootstrap-e2e-default-internal-master  bootstrap-e2e  10.0.0.0/8     tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp                        bootstrap-e2e-master
+bootstrap-e2e-default-internal-node    bootstrap-e2e  10.0.0.0/8     tcp:1-65535,udp:1-65535,icmp                                      bootstrap-e2e-minion
+bootstrap-e2e-default-ssh              bootstrap-e2e  0.0.0.0/0      tcp:22
+bootstrap-e2e-master-etcd              bootstrap-e2e                 tcp:2380,tcp:2381                           bootstrap-e2e-master  bootstrap-e2e-master
+bootstrap-e2e-master-https             bootstrap-e2e  0.0.0.0/0      tcp:443                                                           bootstrap-e2e-master
+bootstrap-e2e-minion-all               bootstrap-e2e  10.180.0.0/14  tcp,udp,icmp,esp,ah,sctp                                          bootstrap-e2e-minion

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:198
Feb 26 02:26:31.467: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:66
Expected
    <int>: 0
to be >=
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:76

Issues about this test specifically: #37144

Failed: [k8s.io] CronJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cronjob.go:79
Expected error:
    <*errors.errorString | 0xc4204155d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cronjob.go:68

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1061
Feb 26 02:33:08.982: Failed getting pod controlled by e2e-test-nginx-deployment: Timeout while waiting for pods with label run=e2e-test-nginx-deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1054

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:92
Feb 26 02:39:51.899: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:67
Expected error:
    <*errors.errorString | 0xc420a94030>: {
        s: "Pod name sample-pod: Gave up waiting 2m0s for 1 pods to come up",
    }
    Pod name sample-pod: Gave up waiting 2m0s for 1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:296

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1154
Feb 26 02:34:13.307: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1092

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:507
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-c6mvt run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure failure-2 -- /bin/sh -c cat && exit 42] []  0xc42110aaa0  error: timed out waiting for the condition\n [] <nil> 0xc421147c80 exit status 1 <nil> <nil> true [0xc420f6caf8 0xc420f6cb20 0xc420f6cb30] [0xc420f6caf8 0xc420f6cb20 0xc420f6cb30] [0xc420f6cb00 0xc420f6cb18 0xc420f6cb28] [0xc8abc0 0xc8acc0 0xc8acc0] 0xc42131f500 <nil>}:\nCommand stdout:\n\nstderr:\nerror: timed out waiting for the condition\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-c6mvt run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure failure-2 -- /bin/sh -c cat && exit 42] []  0xc42110aaa0  error: timed out waiting for the condition
     [] <nil> 0xc421147c80 exit status 1 <nil> <nil> true [0xc420f6caf8 0xc420f6cb20 0xc420f6cb30] [0xc420f6caf8 0xc420f6cb20 0xc420f6cb30] [0xc420f6cb00 0xc420f6cb18 0xc420f6cb28] [0xc8abc0 0xc8acc0 0xc8acc0] 0xc42131f500 <nil>}:
    Command stdout:
    
    stderr:
    error: timed out waiting for the condition
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:493

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Expected error:
    <*errors.errorString | 0xc420415620>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:153

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc42044f4d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1373

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:628
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-q0k1l nginx -- /bin/sh -c /kubectl get pods] []  <nil>  the server doesn't have a resource type \"pods\"\n [] <nil> 0xc4208f8180 exit status 1 <nil> <nil> true [0xc4203a96b8 0xc4203a9718 0xc4203a9768] [0xc4203a96b8 0xc4203a9718 0xc4203a9768] [0xc4203a96f0 0xc4203a9750] [0xc8acc0 0xc8acc0] 0xc420b148a0 <nil>}:\nCommand stdout:\n\nstderr:\nthe server doesn't have a resource type \"pods\"\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-q0k1l nginx -- /bin/sh -c /kubectl get pods] []  <nil>  the server doesn't have a resource type "pods"
     [] <nil> 0xc4208f8180 exit status 1 <nil> <nil> true [0xc4203a96b8 0xc4203a9718 0xc4203a9768] [0xc4203a96b8 0xc4203a9718 0xc4203a9768] [0xc4203a96f0 0xc4203a9750] [0xc8acc0 0xc8acc0] 0xc420b148a0 <nil>}:
    Command stdout:
    
    stderr:
    the server doesn't have a resource type "pods"
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3826

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:73
Expected error:
    <*errors.errorString | 0xc4212ae030>: {
        s: "Pod name cleanup-pod: Gave up waiting 2m0s for 1 pods to come up",
    }
    Pod name cleanup-pod: Gave up waiting 2m0s for 1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:379

Issues about this test specifically: #28339 #36379

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/4885/
Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Feb 26 12:24:39.511: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:308

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:98
Expected error:
    <*errors.StatusError | 0xc420aa7300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "configmaps \"kube-dns-autoscaler\" not found",
            Reason: "NotFound",
            Details: {
                Name: "kube-dns-autoscaler",
                Group: "",
                Kind: "configmaps",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    configmaps "kube-dns-autoscaler" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:67

Issues about this test specifically: #36569 #38446

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Expected error:
    <*errors.errorString | 0xc420415e80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:189

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:78
Expected error:
    <*errors.errorString | 0xc4203eebc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:77

Issues about this test specifically: #38556

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:92
Feb 26 12:38:18.400: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Feb 26 12:34:51.487: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] NodeProblemDetector [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:74
Expected error:
    <*errors.errorString | 0xc420415770>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:73

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/4917/
Multiple broken tests:

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:61
Expected error:
    <*errors.errorString | 0xc4210de500>: {
        s: "expected pod \"pod-projected-secrets-3122c2fa-fced-11e6-be3a-0242ac110003\" success: pods \"pod-projected-secrets-3122c2fa-fced-11e6-be3a-0242ac110003\" not found",
    }
    expected pod "pod-projected-secrets-3122c2fa-fced-11e6-be3a-0242ac110003" success: pods "pod-projected-secrets-3122c2fa-fced-11e6-be3a-0242ac110003" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2198

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:334
Feb 27 05:13:33.604: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Feb 27 05:04:36.289: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #30981

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/4919/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.StatusError | 0xc420358d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"netserver-0\" not found",
            Reason: "NotFound",
            Details: {Name: "netserver-0", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "netserver-0" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Feb 27 06:15:44.984: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.180.1.159:8080/dial?request=hostName&protocol=http&host=10.180.3.136&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:216

Issues about this test specifically: #32375

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Feb 27 06:14:03.466: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.180.3.135:8080/hostName
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:269

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:128
Expected error:
    <*errors.errorString | 0xc420a689d0>: {
        s: "failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-kndvh ss-1 -- /bin/sh -c ls -idlh /data] []  <nil>  error: unable to upgrade connection: container not found (\"nginx\")\n [] <nil> 0xc420c44420 exit status 1 <nil> <nil> true [0xc4203e27a8 0xc4203e27c8 0xc4203e27e0] [0xc4203e27a8 0xc4203e27c8 0xc4203e27e0] [0xc4203e27c0 0xc4203e27d8] [0xc98060 0xc98060] 0xc420a48ba0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: unable to upgrade connection: container not found (\"nginx\")\n\nerror:\nexit status 1\n",
    }
    failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.40 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-kndvh ss-1 -- /bin/sh -c ls -idlh /data] []  <nil>  error: unable to upgrade connection: container not found ("nginx")
     [] <nil> 0xc420c44420 exit status 1 <nil> <nil> true [0xc4203e27a8 0xc4203e27c8 0xc4203e27e0] [0xc4203e27a8 0xc4203e27c8 0xc4203e27e0] [0xc4203e27c0 0xc4203e27d8] [0xc98060 0xc98060] 0xc420a48ba0 <nil>}:
    Command stdout:
    
    stderr:
    error: unable to upgrade connection: container not found ("nginx")
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:106

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/4920/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] NodeProblemDetector [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:74
Expected error:
    <*errors.errorString | 0xc4203cf6c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:73

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:92
Feb 27 07:04:29.586: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:512
Expected error:
    <*errors.errorString | 0xc4203cf090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230

Issues about this test specifically: #32584

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Feb 27 06:48:18.570: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:308

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Expected error:
    <*errors.errorString | 0xc42043e8e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:189

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Feb 27 06:57:03.644: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:78
Expected error:
    <*errors.errorString | 0xc4203c93d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:77

Issues about this test specifically: #38556

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:98
Expected error:
    <*errors.StatusError | 0xc420c08280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "configmaps \"kube-dns-autoscaler\" not found",
            Reason: "NotFound",
            Details: {
                Name: "kube-dns-autoscaler",
                Group: "",
                Kind: "configmaps",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    configmaps "kube-dns-autoscaler" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:67

Issues about this test specifically: #36569 #38446

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:339
Timed out after 300.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*errors.StatusError>: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server could not find the requested resource (get pods pod-configmaps-0a4914a1-fcfb-11e6-859c-0242ac110005)", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc4218b0690), Code:404}}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:336

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/4923/
Multiple broken tests:

Failed: [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:499
Timed out after 300.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*errors.StatusError>: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server could not find the requested resource (get pods pod-projected-configmaps-ad7f0e3a-fd09-11e6-9250-0242ac11000b)", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc4212469b0), Code:404}}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:498

Failed: [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:45
Expected error:
    <*errors.errorString | 0xc421016a10>: {
        s: "expected pod \"pod-projected-secrets-ade8a809-fd09-11e6-92da-0242ac11000b\" success: pods \"pod-projected-secrets-ade8a809-fd09-11e6-92da-0242ac11000b\" not found",
    }
    expected pod "pod-projected-secrets-ade8a809-fd09-11e6-92da-0242ac11000b" success: pods "pod-projected-secrets-ade8a809-fd09-11e6-92da-0242ac11000b" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2198

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:94
Expected error:
    <*errors.errorString | 0xc420e84090>: {
        s: "failed to wait for pods running: [pods \"nginx-562755153-525s4\" not found pods \"nginx-562755153-vw3d9\" not found pods \"nginx-562755153-djrj3\" not found]",
    }
    failed to wait for pods running: [pods "nginx-562755153-525s4" not found pods "nginx-562755153-vw3d9" not found pods "nginx-562755153-djrj3" not found]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:977

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc4204d3950>: {
        s: "expected \"perms of file \\\"/test-volume/test-file\\\": -rwxrwxrwx\" in container output: Expected\n    <string>: mount type of \"/test-volume\": tmpfs\n    content of file \"/test-volume/test-file\": mount-tester new file\n    \n    perms of file \"/test-volume/test-file\": -rw-rw-rw-\n    \nto contain substring\n    <string>: perms of file \"/test-volume/test-file\": -rwxrwxrwx",
    }
    expected "perms of file \"/test-volume/test-file\": -rwxrwxrwx" in container output: Expected
        <string>: mount type of "/test-volume": tmpfs
        content of file "/test-volume/test-file": mount-tester new file
        
        perms of file "/test-volume/test-file": -rw-rw-rw-
        
    to contain substring
        <string>: perms of file "/test-volume/test-file": -rwxrwxrwx
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2198

Failed: [k8s.io] Projected should be consumable in multiple volumes in the same pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:785
Expected error:
    <*errors.errorString | 0xc4209b9d00>: {
        s: "expected pod \"pod-projected-configmaps-adf23655-fd09-11e6-bde8-0242ac11000b\" success: pods \"pod-projected-configmaps-adf23655-fd09-11e6-bde8-0242ac11000b\" not found",
    }
    expected pod "pod-projected-configmaps-adf23655-fd09-11e6-bde8-0242ac11000b" success: pods "pod-projected-configmaps-adf23655-fd09-11e6-bde8-0242ac11000b" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2198

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Projected should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:907
Expected error:
    <*errors.StatusError | 0xc420df0300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"annotationupdateb0f0aae2-fd09-11e6-bbf4-0242ac11000b\" not found",
            Reason: "NotFound",
            Details: {
                Name: "annotationupdateb0f0aae2-fd09-11e6-bbf4-0242ac11000b",
                Group: "",
                Kind: "pods",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "annotationupdateb0f0aae2-fd09-11e6-bbf4-0242ac11000b" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.StatusError | 0xc4202bb000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"netserver-1\" not found",
            Reason: "NotFound",
            Details: {Name: "netserver-1", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "netserver-1" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

@spxtr
Contributor

spxtr commented Feb 27, 2017

This job got extremely flaky somewhere in this commit range: 70a2685...bf984aa
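
(For anyone bisecting: the range above is a GitHub compare between two CI builds; locally the same commits can be listed with something like the sketch below, assuming both SHAs have been fetched.)

# Illustrative only: list commits reachable from bf984aa but not from 70a2685.
git log --oneline 70a2685..bf984aa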

@spxtr
Contributor

spxtr commented Feb 27, 2017

I'm guessing this was caused by #41116. @lukaszo, would you take a look?

https://k8s-testgrid.appspot.com/google-gce#gci-gce-proto

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/5078/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:334
Mar  3 01:37:54.049: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:301

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 30 leaked resources
[ instance-templates ]
+NAME                           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+bootstrap-e2e-minion-template  n1-standard-2               2017-03-03T01:14:55.096-08:00
[ instance-groups ]
+NAME                        LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+bootstrap-e2e-minion-group  us-central1-f  zone   bootstrap-e2e  Yes      3
[ instances ]
+NAME                             ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+bootstrap-e2e-master             us-central1-f  n1-standard-1               10.240.0.2   35.184.109.13   RUNNING
+bootstrap-e2e-minion-group-gsjn  us-central1-f  n1-standard-2               10.240.0.4   35.184.173.8    RUNNING
+bootstrap-e2e-minion-group-n0tf  us-central1-f  n1-standard-2               10.240.0.3   35.184.172.132  RUNNING
+bootstrap-e2e-minion-group-v1cg  us-central1-f  n1-standard-2               10.240.0.5   35.184.170.84   RUNNING
[ disks ]
+NAME                             ZONE           SIZE_GB  TYPE         STATUS
+bootstrap-e2e-master             us-central1-f  20       pd-standard  READY
+bootstrap-e2e-master-pd          us-central1-f  20       pd-ssd       READY
+bootstrap-e2e-minion-group-gsjn  us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-n0tf  us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-v1cg  us-central1-f  100      pd-standard  READY
[ addresses ]
+NAME                     REGION       ADDRESS        STATUS
+bootstrap-e2e-master-ip  us-central1  35.184.109.13  IN_USE
[ routes ]
+bootstrap-e2e-0e2fe5e4-fff2-11e6-8b7d-42010af00002  bootstrap-e2e  10.180.2.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-gsjn  1000
+bootstrap-e2e-0f080be5-fff2-11e6-8b7d-42010af00002  bootstrap-e2e  10.180.3.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-v1cg  1000
+bootstrap-e2e-11c37517-fff2-11e6-8b7d-42010af00002  bootstrap-e2e  10.180.0.0/24  us-central1-f/instances/bootstrap-e2e-master             1000
+bootstrap-e2e-1252fb6a-fff2-11e6-8b7d-42010af00002  bootstrap-e2e  10.180.1.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-n0tf  1000
[ routes ]
+default-route-6f5235dd9f9fdcf9                      bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                 1000
[ routes ]
+default-route-b9ee96f6c41e58cd                      bootstrap-e2e  10.240.0.0/16                                                           1000
[ firewall-rules ]
+NAME                                   NETWORK        SRC_RANGES     RULES                                       SRC_TAGS              TARGET_TAGS
+bootstrap-e2e-default-internal-master  bootstrap-e2e  10.0.0.0/8     tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp                        bootstrap-e2e-master
+bootstrap-e2e-default-internal-node    bootstrap-e2e  10.0.0.0/8     tcp:1-65535,udp:1-65535,icmp                                      bootstrap-e2e-minion
+bootstrap-e2e-default-ssh              bootstrap-e2e  0.0.0.0/0      tcp:22
+bootstrap-e2e-master-etcd              bootstrap-e2e                 tcp:2380,tcp:2381                           bootstrap-e2e-master  bootstrap-e2e-master
+bootstrap-e2e-master-https             bootstrap-e2e  0.0.0.0/0      tcp:443                                                           bootstrap-e2e-master
+bootstrap-e2e-minion-all               bootstrap-e2e  10.180.0.0/14  tcp,udp,icmp,esp,ah,sctp                                          bootstrap-e2e-minion

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454

@lukaszo
Contributor

lukaszo commented Mar 3, 2017

@spxtr how could my PR be related? There are no DaemonSet tests in this job.

@calebamiles modified the milestone: v1.6 Mar 3, 2017
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/5233/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc4212bad80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Pod \"nfs-client\" is invalid: [spec.volumes[0].nfs.server: Required value, spec.containers[0].volumeMounts[0].name: Not found: \"nfs-volume\"]",
            Reason: "Invalid",
            Details: {
                Name: "nfs-client",
                Group: "",
                Kind: "Pod",
                Causes: [
                    {
                        Type: "FieldValueRequired",
                        Message: "Required value",
                        Field: "spec.volumes[0].nfs.server",
                    },
                    {
                        Type: "FieldValueNotFound",
                        Message: "Not found: \"nfs-volume\"",
                        Field: "spec.containers[0].volumeMounts[0].name",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
    Pod "nfs-client" is invalid: [spec.volumes[0].nfs.server: Required value, spec.containers[0].volumeMounts[0].name: Not found: "nfs-volume"]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:74
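
(For context on the validation error above: the nfs-client pod referenced a volumeMount named "nfs-volume" that was never defined under spec.volumes, and the NFS volume it did define omitted the required server field. A well-formed NFS client pod looks roughly like the sketch below; the image, server address, and export path are placeholders, not values from this test.)

# Hypothetical manifest for illustration; not the manifest the e2e test generates.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
  - name: nfs-client
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: nfs-volume        # must match the volume name defined below
      mountPath: /mnt/nfs
  volumes:
  - name: nfs-volume
    nfs:
      server: 10.0.0.5        # spec.volumes[0].nfs.server is a required field
      path: /exports
EOF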

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:190
Mar  7 18:08:30.985: Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:118

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:339
Expected error:
    <*errors.errorString | 0xc42043d800>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/5242/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:411
Expected error:
    <*url.Error | 0xc42167fa70>: {
        Op: "Post",
        URL: "https://35.184.49.155/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-rf74f/services/rc-light-ctrl/proxy/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10",
        Err: {},
    }
    Post https://35.184.49.155/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-rf74f/services/rc-light-ctrl/proxy/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10: context deadline exceeded
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:276

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:512
Expected error:
    <*errors.errorString | 0xc4203ac4e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:276

Issues about this test specifically: #32584

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:93
Expected error:
    <*errors.errorString | 0xc420da0fb0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 9, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 9, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:999

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:316
Mar  8 02:19:43.369: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2016

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:372
Expected error:
    <*errors.errorString | 0xc421254560>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1662

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:92
Expected error:
    <*errors.errorString | 0xc420d8a5c0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:396

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:318
Expected error:
    <*errors.errorString | 0xc4213126c0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:277

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:69
Expected error:
    <*errors.errorString | 0xc4212dfe50>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63624564877, nsec:12566637, loc:(*time.Location)(0x4d6d520)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624564877, nsec:12566774, loc:(*time.Location)(0x4d6d520)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63624564877, nsec:12566637, loc:(*time.Location)(0x4d6d520)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63624564877, nsec:12566774, loc:(*time.Location)(0x4d6d520)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:314

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:190
Mar  8 02:23:04.861: Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:118

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:129
Expected error:
    <*errors.errorString | 0xc4203fb290>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:74

Issues about this test specifically: #28346

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc4203d0f60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:340
Mar  8 02:16:59.583: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2016

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/5287/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Expected error:
    <*errors.StatusError | 0xc4215c2080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.180.1.146:8080/ConsumeCPU?durationSec=30&millicores=150&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.180.1.146:8080/ConsumeCPU?durationSec=30&millicores=150&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.180.1.146:8080/ConsumeCPU?durationSec=30&millicores=150&requestSizeMillicores=20'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:235

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:61
Expected error:
    <*errors.errorString | 0xc4208c4a90>: {
        s: "expected \"content of file \\\"/etc/projected-secret-volume/new-path-data-1\\\": value-1\" in container output: Expected\n    <string>: mode of file \"/etc/projected-secret-volume/new-path-data-1\": -r--------\n    content of file \"/etc/projected-configmap-volumes/delete/data-1\": value-1\n    unexpected stream type \"\"\nto contain substring\n    <string>: content of file \"/etc/projected-secret-volume/new-path-data-1\": value-1",
    }
    expected "content of file \"/etc/projected-secret-volume/new-path-data-1\": value-1" in container output: Expected
        <string>: mode of file "/etc/projected-secret-volume/new-path-data-1": -r--------
        content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
        unexpected stream type ""
    to contain substring
        <string>: content of file "/etc/projected-secret-volume/new-path-data-1": value-1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2198

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:710
Timed out after 300.001s.
Expected
    <string>: content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    content of file "/etc/projected-configmap-volumes/delete/data-1": value-1
    
to contain substring
    <string>: Error reading file /etc/projected-configmap-volumes/delete/data-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:709

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/5325/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Expected error:
    <*errors.StatusError | 0xc421106300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'dial tcp 10.180.2.76:8080: getsockopt: no route to host'\\nTrying to reach: 'http://10.180.2.76:8080/ConsumeCPU?durationSec=30&millicores=150&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'dial tcp 10.180.2.76:8080: getsockopt: no route to host'\nTrying to reach: 'http://10.180.2.76:8080/ConsumeCPU?durationSec=30&millicores=150&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'dial tcp 10.180.2.76:8080: getsockopt: no route to host'\nTrying to reach: 'http://10.180.2.76:8080/ConsumeCPU?durationSec=30&millicores=150&requestSizeMillicores=20'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:235

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:92
Expected error:
    <*errors.StatusError | 0xc42102b300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'dial tcp 10.180.2.75:8080: getsockopt: connection timed out'\\nTrying to reach: 'http://10.180.2.75:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'dial tcp 10.180.2.75:8080: getsockopt: connection timed out'\nTrying to reach: 'http://10.180.2.75:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'dial tcp 10.180.2.75:8080: getsockopt: connection timed out'\nTrying to reach: 'http://10.180.2.75:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:235

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:57
Expected error:
    <*errors.StatusError | 0xc421266400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'dial tcp 10.240.0.4:10250: getsockopt: connection refused'\\nTrying to reach: 'https://bootstrap-e2e-minion-group-bbgt:10250/metrics'\") has prevented the request from succeeding (get nodes bootstrap-e2e-minion-group-bbgt:10250)",
            Reason: "InternalError",
            Details: {
                Name: "bootstrap-e2e-minion-group-bbgt:10250",
                Group: "",
                Kind: "nodes",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'dial tcp 10.240.0.4:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-bbgt:10250/metrics'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'dial tcp 10.240.0.4:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-bbgt:10250/metrics'") has prevented the request from succeeding (get nodes bootstrap-e2e-minion-group-bbgt:10250)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:55

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/5337/
Multiple broken tests:

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-5bq55" to be ready
Expected error:
    <*errors.errorString | 0xc4203d1780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:131
Mar 11 03:21:04.337: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-tdgs0 to expose endpoints map[pod1:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:70
Expected error:
    <*errors.errorString | 0xc4211e9bf0>: {
        s: "expected pod \"pod-configmaps-91effb70-064c-11e7-b621-0242ac11000a\" success: <nil>",
    }
    expected pod "pod-configmaps-91effb70-064c-11e7-b621-0242ac11000a" success: <nil>
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-mw9pn" to be ready
Expected error:
    <*errors.errorString | 0xc420451bc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32644

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:194
Expected error:
    <*errors.errorString | 0xc42101e570>: {
        s: "Failed to execute a successful GET within 15m0s, Last response body for http://35.184.58.245/foo, host foo.bar.com:\n<html>\r\n<head><title>503 Service Temporarily Unavailable</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>503 Service Temporarily Unavailable</h1></center>\r\n<hr><center>nginx/1.11.9</center>\r\n</body>\r\n</html>\r\n\n\ntimed out waiting for the condition\n",
    }
    Failed to execute a successful GET within 15m0s, Last response body for http://35.184.58.245/foo, host foo.bar.com:
    <html>
    <head><title>503 Service Temporarily Unavailable</title></head>
    <body bgcolor="white">
    <center><h1>503 Service Temporarily Unavailable</h1></center>
    <hr><center>nginx/1.11.9</center>
    </body>
    </html>
    
    
    timed out waiting for the condition
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ingress_utils.go:924

Issues about this test specifically: #38556

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Secrets optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:341
Timed out after 240.000s.
Expected
    <string>: 
to contain substring
    <string>: Error reading file /etc/secret-volumes/create/data-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:308

@spxtr removed their assignment Mar 13, 2017
@calebamiles
Contributor

Closing this issue due to pollution from seemingly unrelated test failures.

cc: @ethernetdan, @kubernetes/release-team, @kubernetes/test-infra-maintainers
