
ci-kubernetes-e2e-gci-gke-test: broken test run #39211

Closed
k8s-github-robot opened this issue Dec 24, 2016 · 25 comments
Labels
kind/flake Categorizes issue or PR as related to a flaky test.

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/127/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Dec 23 16:05:43.694: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 17:00:58.770: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422b23400), (*api.Node)(0xc422b23678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:240
Expected success, but got an error:
    <*errors.errorString | 0xc4203d2f50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:232

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 17:07:35.726: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214bca00), (*api.Node)(0xc4214bcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 17:15:22.934: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4229ba000), (*api.Node)(0xc4229ba278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 17:11:09.283: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422f28000), (*api.Node)(0xc422f28278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 17:04:22.485: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422423400), (*api.Node)(0xc422423678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 16:41:02.335: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421634000), (*api.Node)(0xc421634278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 16:57:32.465: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422c72a00), (*api.Node)(0xc422c72c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ disks ]
+NAME                                                             ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-e79a-pvc-98501167-c96a-11e6-b3de-42010af00011  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060
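For triage, here is a minimal sketch (not part of the e2e harness; the name pattern is inferred from the single leaked disk reported above) of how one might flag GCE disks that look like orphaned GKE-provisioned PVC disks in a `gcloud compute disks list` name column:

```python
import re

# Hypothetical helper: matches names like
# gke-bootstrap-e2e-e79a-pvc-98501167-c96a-11e6-b3de-42010af00011
# i.e. a gke-* prefix followed by "-pvc-" and a 36-character UUID.
PVC_DISK_RE = re.compile(r"^gke-.+-pvc-[0-9a-f-]{36}$")

def leaked_pvc_disks(disk_names):
    """Return the subset of disk names that match the PVC disk pattern."""
    return [name for name in disk_names if PVC_DISK_RE.match(name)]
```

A real cleanup would still need to confirm the disk is unattached and that no live PersistentVolume references it before deleting anything.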

Previous issues for this suite: #37522 #38580

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 labels Dec 24, 2016
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/129/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:180
Expected error:
    <*errors.errorString | 0xc422260170>: {
        s: "expected pod \"downwardapi-volume-93b446d0-c9cf-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'downwardapi-volume-93b446d0-c9cf-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-93b446d0-c9cf-11e6-b23e-0242ac110009" success: gave up waiting for pod 'downwardapi-volume-93b446d0-c9cf-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
Expected error:
    <*errors.errorString | 0xc42228ba30>: {
        s: "expected pod \"\" success: gave up waiting for pod 'pod-service-account-f8996db6-c9d7-11e6-b23e-0242ac110009-l8kpr' to be 'success or failure' after 5m0s",
    }
    expected pod "" success: gave up waiting for pod 'pod-service-account-f8996db6-c9d7-11e6-b23e-0242ac110009-l8kpr' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37526

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc421496910>: {
        s: "expected pod \"downwardapi-volume-b4e28e92-c9c9-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'downwardapi-volume-b4e28e92-c9c9-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-b4e28e92-c9c9-11e6-b23e-0242ac110009" success: gave up waiting for pod 'downwardapi-volume-b4e28e92-c9c9-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37423

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:171
Expected error:
    <*errors.errorString | 0xc421583690>: {
        s: "expected pod \"downwardapi-volume-32ec0804-c9c7-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'downwardapi-volume-32ec0804-c9c7-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-32ec0804-c9c7-11e6-b23e-0242ac110009" success: gave up waiting for pod 'downwardapi-volume-32ec0804-c9c7-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:196
Expected error:
    <*errors.errorString | 0xc4220bd530>: {
        s: "expected pod \"downwardapi-volume-3b1f58b8-c9cb-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'downwardapi-volume-3b1f58b8-c9cb-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-3b1f58b8-c9cb-11e6-b23e-0242ac110009" success: gave up waiting for pod 'downwardapi-volume-3b1f58b8-c9cb-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:56
Expected error:
    <*errors.errorString | 0xc421965830>: {
        s: "expected pod \"pod-secrets-b5843284-c9db-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'pod-secrets-b5843284-c9db-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-b5843284-c9db-11e6-b23e-0242ac110009" success: gave up waiting for pod 'pod-secrets-b5843284-c9db-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37529

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:150
Expected error:
    <*errors.errorString | 0xc421410ab0>: {
        s: "expected pod \"pod-secrets-536a189b-c9be-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'pod-secrets-536a189b-c9be-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-536a189b-c9be-11e6-b23e-0242ac110009" success: gave up waiting for pod 'pod-secrets-536a189b-c9be-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
Expected error:
    <*errors.errorString | 0xc421cc33d0>: {
        s: "expected pod \"pod-99024197-c9da-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'pod-99024197-c9da-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-99024197-c9da-11e6-b23e-0242ac110009" success: gave up waiting for pod 'pod-99024197-c9da-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:59
Expected error:
    <*errors.errorString | 0xc4221c4f90>: {
        s: "expected pod \"pod-configmaps-6e87309e-c9ce-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'pod-configmaps-6e87309e-c9ce-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-6e87309e-c9ce-11e6-b23e-0242ac110009" success: gave up waiting for pod 'pod-configmaps-6e87309e-c9ce-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32949

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:77
Expected error:
    <*errors.errorString | 0xc421759f70>: {
        s: "expected pod \"pod-secrets-ed6ab2ab-c9c7-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'pod-secrets-ed6ab2ab-c9c7-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-ed6ab2ab-c9c7-11e6-b23e-0242ac110009" success: gave up waiting for pod 'pod-secrets-ed6ab2ab-c9c7-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37525

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:48
Expected error:
    <*errors.errorString | 0xc4221b01f0>: {
        s: "expected pod \"downwardapi-volume-6ce2d485-c9cd-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'downwardapi-volume-6ce2d485-c9cd-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-6ce2d485-c9cd-11e6-b23e-0242ac110009" success: gave up waiting for pod 'downwardapi-volume-6ce2d485-c9cd-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31836

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc4213f05a0>: {
        s: "expected pod \"pod-23515e2c-c9c5-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'pod-23515e2c-c9c5-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-23515e2c-c9c5-11e6-b23e-0242ac110009" success: gave up waiting for pod 'pod-23515e2c-c9c5-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37500

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc421dc0770>: {
        s: "expected pod \"pod-f57da865-c9c8-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'pod-f57da865-c9c8-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-f57da865-c9c8-11e6-b23e-0242ac110009" success: gave up waiting for pod 'pod-f57da865-c9c8-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #33987

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc4203d2dd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc4221c4ff0>: {
        s: "expected pod \"pod-fa1876da-c9cb-11e6-b23e-0242ac110009\" success: gave up waiting for pod 'pod-fa1876da-c9cb-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-fa1876da-c9cb-11e6-b23e-0242ac110009" success: gave up waiting for pod 'pod-fa1876da-c9cb-11e6-b23e-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31400

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/134/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc42263e420>: {
        s: "expected pod \"downwardapi-volume-e1d2981d-cb1f-11e6-80a9-0242ac110004\" success: gave up waiting for pod 'downwardapi-volume-e1d2981d-cb1f-11e6-80a9-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-e1d2981d-cb1f-11e6-80a9-0242ac110004" success: gave up waiting for pod 'downwardapi-volume-e1d2981d-cb1f-11e6-80a9-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37423

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:58
Expected error:
    <*errors.errorString | 0xc421dd5040>: {
        s: "expected pod \"downwardapi-volume-8dbf1786-cb3d-11e6-80a9-0242ac110004\" success: gave up waiting for pod 'downwardapi-volume-8dbf1786-cb3d-11e6-80a9-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-8dbf1786-cb3d-11e6-80a9-0242ac110004" success: gave up waiting for pod 'downwardapi-volume-8dbf1786-cb3d-11e6-80a9-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36300

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:196
Expected error:
    <*errors.errorString | 0xc422322c80>: {
        s: "expected pod \"downwardapi-volume-409835dc-cb3f-11e6-80a9-0242ac110004\" success: gave up waiting for pod 'downwardapi-volume-409835dc-cb3f-11e6-80a9-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-409835dc-cb3f-11e6-80a9-0242ac110004" success: gave up waiting for pod 'downwardapi-volume-409835dc-cb3f-11e6-80a9-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
Expected error:
    <*errors.errorString | 0xc421874b80>: {
        s: "expected pod \"\" success: gave up waiting for pod 'pod-service-account-573bb9a2-cb10-11e6-80a9-0242ac110004-wbsvn' to be 'success or failure' after 5m0s",
    }
    expected pod "" success: gave up waiting for pod 'pod-service-account-573bb9a2-cb10-11e6-80a9-0242ac110004-wbsvn' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37526

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:77
Expected error:
    <*errors.errorString | 0xc4220056b0>: {
        s: "expected pod \"pod-secrets-7b620e59-cb1b-11e6-80a9-0242ac110004\" success: gave up waiting for pod 'pod-secrets-7b620e59-cb1b-11e6-80a9-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-7b620e59-cb1b-11e6-80a9-0242ac110004" success: gave up waiting for pod 'pod-secrets-7b620e59-cb1b-11e6-80a9-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37525

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:40
Expected error:
    <*errors.errorString | 0xc422632080>: {
        s: "expected pod \"pod-secrets-66b07ec0-cb1c-11e6-80a9-0242ac110004\" success: gave up waiting for pod 'pod-secrets-66b07ec0-cb1c-11e6-80a9-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-66b07ec0-cb1c-11e6-80a9-0242ac110004" success: gave up waiting for pod 'pod-secrets-66b07ec0-cb1c-11e6-80a9-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35256

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/139/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:51:50.076: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d558f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:41:07.007: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217838f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 13:22:27.298: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218f04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:14:46.011: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a64ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 10:45:03.307: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42250aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 13:12:21.552: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42199eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:10:32.805: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420abf8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc4203d1710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33887

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:148
Expected error:
    <*errors.errorString | 0xc4203d1710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:122

Issues about this test specifically: #31428

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:07:19.454: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fd98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421f5c6d0>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:03:01.561: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42023f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 12:01:45.112: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422004ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 10:48:50.269: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209084f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31502 #32947 #38646

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42224e7c0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:37:55.727: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217864f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30981

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 13:38:50.808: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421508ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28346

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 12:18:13.070: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c978f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:44:18.235: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218c58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Dec 27 10:28:27.922: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'ssh: rejected: connect failed (Connection refused)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-84ea1bb7-pd6j:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 14:49:31.889: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420696ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:58:31.992: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216664f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:31:28.432: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e52ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 12:41:02.900: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422a484f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 12:31:23.677: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42127cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 13:05:52.855: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ccc4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc42153cca0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-84ea1bb7-pd6j boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-84ea1bb7-pd6j boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 10:58:41.878: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a64ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc4203d1710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 13:02:36.432: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218c58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37774

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:55:02.078: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217838f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 13:09:08.145: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213578f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 12:46:51.786: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420696ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 10:52:03.466: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421356ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 15:25:40.128: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42194cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:34:41.202: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221184f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 10:55:30.290: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211a84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 12:37:50.057: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42023f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 12:24:53.549: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42222e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28283

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 11:47:32.525: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42023f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 13:19:13.794: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206978f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 13:15:34.735: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ef24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 27 15:28:59.995: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420908278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/141/
Multiple broken tests:

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc421aa7610>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
Dec 28 03:14:07.171: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:943
Dec 28 03:21:59.367: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26126 #30653 #36408

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/142/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224e9bf0>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225285b0>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc42190ccd0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 19, 170],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.19.170:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214a17b0>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422a39100>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422661340>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221fd1b0>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e9ff50>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422f64d20>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421fef040>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421359cd0>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216f2b60>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220b54f0>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422b80600>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220b4e60>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422f78db0>: {
        s: "Namespace e2e-tests-services-57kbp is active",
    }
    Namespace e2e-tests-services-57kbp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/144/
Multiple broken tests:

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc420724100>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc42179a010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1253
Dec 29 02:29:52.835: Failed getting pod e2e-test-nginx-pod: Timeout while waiting for pods with labels "run=e2e-test-nginx-pod" to be running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1231

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:310
Expected error:
    <*errors.errorString | 0xc4203ab300>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #35793

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234
starting pod liveness-http in namespace e2e-tests-container-probe-2rn0t
Expected error:
    <*errors.errorString | 0xc4203ab300>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:364

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc421f8e110>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Dec 29 05:26:22.869: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc422d5e180>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Dec 29 00:43:11.273: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38439

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:188
Expected error:
    <*errors.errorString | 0xc4203ab300>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:169

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc421f80310>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618600919, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618600919, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618600993, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618600993, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3837372172\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618600919, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618600919, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618600993, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618600993, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3837372172\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1335

Issues about this test specifically: #31697 #36574

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:68
Expected error:
    <*errors.errorString | 0xc421d0c010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:371

Issues about this test specifically: #27232

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1087
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.179.191 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-wftbm] []  <nil> Created e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0\nScaling up e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0 up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc42119a690 exit status 1 <nil> <nil> true [0xc420e94948 0xc420e94960 0xc420e94978] [0xc420e94948 0xc420e94960 0xc420e94978] [0xc420e94958 0xc420e94970] [0x970e80 0x970e80] 0xc42119f920 <nil>}:\nCommand stdout:\nCreated e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0\nScaling up e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0 up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.179.191 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-wftbm] []  <nil> Created e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0
    Scaling up e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0 up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc42119a690 exit status 1 <nil> <nil> true [0xc420e94948 0xc420e94960 0xc420e94978] [0xc420e94948 0xc420e94960 0xc420e94978] [0xc420e94958 0xc420e94970] [0x970e80 0x970e80] 0xc42119f920 <nil>}:
    Command stdout:
    Created e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0
    Scaling up e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-9c23aac400fd11bcd538cc57dc4e39a0 up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:169

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Dec 29 03:01:32.548: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:261
Expected error:
    <*errors.errorString | 0xc4203ab300>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:202

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:604
Dec 29 00:09:54.438: timed out waiting for container restart in pod=pod-back-off-image/back-off
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:598

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750
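The back-off test above exercises the kubelet's crash-loop behaviour: restart delays grow exponentially up to a cap, and are reset when the pod spec changes (e.g. an image update). A rough, self-contained sketch of that policy (loosely modelled on the kubelet; the actual implementation differs in detail and these names are illustrative):

```python
class CrashLoopBackoff:
    """Exponential restart back-off with a cap and an explicit reset,
    loosely mimicking kubelet crash-loop handling (illustrative only)."""

    def __init__(self, base=10.0, cap=300.0):
        self.base = base      # first delay, in seconds
        self.cap = cap        # maximum delay, in seconds
        self.failures = 0

    def next_delay(self):
        """Delay before the next restart attempt; doubles per failure."""
        delay = min(self.base * (2 ** self.failures), self.cap)
        self.failures += 1
        return delay

    def reset(self):
        """Called when the pod spec changes, e.g. on an image update."""
        self.failures = 0
```

The failing test updates the container image and then asserts the container restarts promptly instead of waiting out the accumulated back-off, i.e. that something equivalent to reset() happened.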

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Dec 29 03:23:59.439: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc4226ca010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:477

Issues about this test specifically: #28339 #36379

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/146/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42255e730>: {
        s: "Namespace e2e-tests-services-kbx4f is active",
    }
    Namespace e2e-tests-services-kbx4f is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e21f80>: {
        s: "Namespace e2e-tests-services-kbx4f is active",
    }
    Namespace e2e-tests-services-kbx4f is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc4236dbfb0>: {
        s: "expected pod \"downwardapi-volume-1682776e-ce50-11e6-810e-0242ac110003\" success: gave up waiting for pod 'downwardapi-volume-1682776e-ce50-11e6-810e-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-1682776e-ce50-11e6-810e-0242ac110003" success: gave up waiting for pod 'downwardapi-volume-1682776e-ce50-11e6-810e-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37423

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c63c80>: {
        s: "Namespace e2e-tests-services-kbx4f is active",
    }
    Namespace e2e-tests-services-kbx4f is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421600cf0>: {
        s: "error while stopping RC: service2: timed out waiting for \"service2\" to be synced",
    }
    error while stopping RC: service2: timed out waiting for "service2" to be synced
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421fa5080>: {
        s: "Namespace e2e-tests-services-kbx4f is active",
    }
    Namespace e2e-tests-services-kbx4f is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/159/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc420b32df0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 00:03:48.767: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212f2c78), (*api.Node)(0xc4212f2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673
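The recurring "All nodes should be ready after test" failure means the framework's post-test check found nodes whose Ready condition was not True. Conceptually the check filters node conditions like this (a self-contained sketch over dicts shaped like the v1 Node API object; the real check is Go code in test/e2e/framework):

```python
def not_ready_nodes(nodes):
    """Return names of nodes whose 'Ready' condition is not 'True'.

    `nodes` is a list of dicts shaped like the relevant slice of a
    v1 Node: {"name": ..., "conditions": [{"type": ..., "status": ...}]}.
    """
    bad = []
    for node in nodes:
        ready = next(
            (c for c in node["conditions"] if c["type"] == "Ready"), None
        )
        # A missing Ready condition counts as not ready, matching the
        # conservative behaviour the post-test check needs.
        if ready is None or ready["status"] != "True":
            bad.append(node["name"])
    return bad
```

In the logs above the check prints the raw Node pointers rather than names, but the decision it makes is the same: any node without Ready=True fails the suite-level assertion.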

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 21:19:29.327: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a3ac78), (*api.Node)(0xc420a3aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 19:52:56.684: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215e5678), (*api.Node)(0xc4215e58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc420352cb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 22:57:53.695: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202b5678), (*api.Node)(0xc4202b58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 19:22:52.411: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421120c78), (*api.Node)(0xc421120ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 00:26:17.111: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d8cc78), (*api.Node)(0xc420d8cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26191

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 23:07:57.303: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ab8c78), (*api.Node)(0xc421ab8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36948

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Jan  2 21:34:03.276: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'ssh: rejected: connect failed (Connection refused)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb:10250/stats/?timeout=5m0s'") has prevented the request from succeeding an error on the server ("Error: 'ssh: rejected: connect failed (Connection refused)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-9acfd5fa-jltd:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28283

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 00:07:06.068: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421590c78), (*api.Node)(0xc421590ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 19:08:04.490: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211d4c78), (*api.Node)(0xc4211d4ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 00:22:58.788: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215ce278), (*api.Node)(0xc4215ce4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37774

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 19:19:05.946: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e3ec78), (*api.Node)(0xc420e3eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 01:20:00.590: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bd6278), (*api.Node)(0xc420bd64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42282b8e0>: {
        s: "8 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nkube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nl7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\n",
    }
    8 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    kube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    l7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 01:37:50.640: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421449678), (*api.Node)(0xc4214498f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421267da0>: {
        s: "8 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nkube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nl7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\n",
    }
    8 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    kube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    l7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 20:25:59.880: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f18c78), (*api.Node)(0xc420f18ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 20:06:17.272: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a3ec78), (*api.Node)(0xc421a3eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 00:00:33.544: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219c4c78), (*api.Node)(0xc4219c4ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 20:43:14.585: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42149c278), (*api.Node)(0xc42149c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 00:41:05.674: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219a7678), (*api.Node)(0xc4219a78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 00:37:23.365: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bd6278), (*api.Node)(0xc420bd64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 20:46:35.317: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206aec78), (*api.Node)(0xc4206aeef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35277

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 00:29:34.292: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201eac78), (*api.Node)(0xc4201eaef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc420352cb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 21:06:41.604: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209bf678), (*api.Node)(0xc4209bf8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32936

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 19:11:26.039: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f17678), (*api.Node)(0xc420f178f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 18:48:26.188: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421028c78), (*api.Node)(0xc421028ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 23:17:42.992: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420df7678), (*api.Node)(0xc420df78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 01:09:49.112: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42101ac78), (*api.Node)(0xc42101aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 00:33:53.805: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b8a278), (*api.Node)(0xc421b8a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 19:15:37.221: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ef8278), (*api.Node)(0xc420ef84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28084

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421236330>: {
        s: "8 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nkube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nl7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\n",
    }
    8 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    kube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    l7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 19:45:07.820: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218dcc78), (*api.Node)(0xc4218dcef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37914

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 20:32:40.509: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201eac78), (*api.Node)(0xc4201eaef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29050

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 01:44:31.308: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b85678), (*api.Node)(0xc421b858f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc420352cb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e49f60>: {
        s: "8 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nkube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nl7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\n",
    }
    8 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    kube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    l7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 22:47:55.801: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214a2c78), (*api.Node)(0xc4214a2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  3 01:50:59.888: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dd0c78), (*api.Node)(0xc420dd0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 19:39:39.786: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214c0278), (*api.Node)(0xc4214c04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 21:26:37.207: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420df7678), (*api.Node)(0xc420df78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 22:35:10.006: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212de278), (*api.Node)(0xc4212de4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 21:40:31.386: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420df6278), (*api.Node)(0xc420df64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 20:22:48.837: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421820278), (*api.Node)(0xc4218204f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b9b9c0>: {
        s: "8 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nkube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nl7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\n",
    }
    8 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    kube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    kubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    l7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516
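The "8 / 15 pods in namespace ... are NOT in RUNNING and READY state" dumps above repeat the same fixed layout: a summary line, then one row per pod with its conditions inlined. A minimal sketch for pulling the failing pod names out of such a dump when triaging (the helper names are ours, not part of the e2e framework):

```python
import re

# Matches the summary line emitted by the e2e framework, e.g.
# '8 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s'
SUMMARY_RE = re.compile(r'(\d+) / (\d+) pods in namespace "([^"]+)" are NOT')


def parse_summary(line):
    """Return (not_ready, total, namespace) from the summary line, or None."""
    m = SUMMARY_RE.search(line)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2)), m.group(3)


def not_ready_pods(rows):
    """Return pod names from dump rows whose Ready condition is False.

    Each row starts with the pod name; conditions appear inline as
    '{Ready False ...}' when the pod is not ready.
    """
    return [row.split()[0] for row in rows if "{Ready False" in row]
```

With the rows from the dump above, this would surface the fluentd, kube-dns, kube-proxy, dashboard, and l7-default-backend pods as the not-ready set.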

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 20:36:32.226: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209bec78), (*api.Node)(0xc4209beef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 23:04:45.119: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e4d678), (*api.Node)(0xc420e4d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 23:38:07.544: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420387678), (*api.Node)(0xc4203878f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 23:27:33.078: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421774278), (*api.Node)(0xc4217744f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420a946c0>: {
        s: "8 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-dns-4101612645-zhs0m                                          gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:31 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nkube-dns-autoscaler-2715466192-r0k10                               gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:52 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb            gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd            gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:19 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]\nkubernetes-dashboard-3543765157-b54r0                              gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:22 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\nl7-default-backend-2234341178-jdv7d                                gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:14:51 -0800 PST  }]\n",
    }
    8 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb gke-bootstrap-e2e-default-pool-9acfd5fa-b4fb Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:11 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9acfd5fa-jltd gke-bootstrap-e2e-default-pool-9acfd5fa-jltd Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:13:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 18:15:00 -0800 PST  } {PodScheduled T

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/171/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 16:00:07.767: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420393400), (*api.Node)(0xc420393678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 16:17:14.513: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223faa00), (*api.Node)(0xc4223fac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan  6 13:57:43.470: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227829b0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Jan  6 16:44:02.525: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'ssh: rejected: connect failed (Connection refused)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-c2f87cc5-fss0:10250/stats/?timeout=5m0s'") has prevented the request from succeeding an error on the server ("Error: 'ssh: rejected: connect failed (Connection refused)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-c2f87cc5-snwl:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 16:33:26.243: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f65400), (*api.Node)(0xc420f65678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38083

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 15:56:56.341: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c97400), (*api.Node)(0xc421c97678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 16:36:39.750: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422dc0000), (*api.Node)(0xc422dc0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37515

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-d9e4-pvc-08418e14-d459-11e6-ba84-42010af00041  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060
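In the DiffResources failure above, e2e.go prints leaked resources as diff-style lines prefixed with `+` (resources present after the run but not before). A small sketch for extracting the leaked resource names for cleanup (the helper name is ours, not part of e2e.go):

```python
def leaked_resources(diff_lines):
    """Return the first field of each '+'-prefixed line in DiffResources output.

    For GCE disks that field is the disk name, e.g. the leaked
    PVC-backed pd-standard disk reported above.
    """
    return [line[1:].split()[0] for line in diff_lines if line.startswith("+")]
```

The returned names can then be checked against `gcloud compute disks list` by hand before deleting anything.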

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 15:53:37.991: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42230b400), (*api.Node)(0xc42230b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 16:06:46.602: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ec9400), (*api.Node)(0xc421ec9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 16:21:07.516: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422e1e000), (*api.Node)(0xc422e1e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30632

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 16:28:02.069: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42176e000), (*api.Node)(0xc42176e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 14:30:00.402: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422fea000), (*api.Node)(0xc422fea278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 16:43:01.260: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ed0a00), (*api.Node)(0xc421ed0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 15:50:24.822: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ec8a00), (*api.Node)(0xc421ec8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 14:33:11.455: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42228f400), (*api.Node)(0xc42228f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 14:36:26.806: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a3c000), (*api.Node)(0xc421a3c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 15:47:07.555: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422d66000), (*api.Node)(0xc422d66278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 16:03:21.198: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42228e000), (*api.Node)(0xc42228e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32122 #38040

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/177/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b3b2e0>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422aaecb0>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421fdc760>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4231ebcc0>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4230d5b60>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423080c30>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422e004f0>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4226df7f0>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422999400>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422d950f0>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4226f89a0>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42336e1c0>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423104050>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c7d690>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4223c3ec0>: {
        s: "Namespace e2e-tests-services-7v0vn is active",
    }
    Namespace e2e-tests-services-7v0vn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914
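Nearly all of the SchedulerPredicates failures in this run share one root cause: before each [Serial] test, the check at scheduler_predicates.go:78 waits for every other test namespace to finish deleting, and fails if one (here e2e-tests-services-7v0vn) is still active. The "timed out waiting for the condition" style errors throughout this issue come from the same kind of bounded polling loop. A rough illustrative sketch (in Python; the real framework uses Go's wait.Poll helpers, and the names below are ours, not the framework's):

```python
import time

def wait_for_condition(check, timeout=30.0, interval=1.0):
    """Poll check() until it returns True or the timeout elapses.

    Illustrative only: mirrors the shape of the e2e framework's polling,
    which produces the "timed out waiting for the condition" errors above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return
        time.sleep(interval)
    raise TimeoutError("timed out waiting for the condition")
```

When the lingering namespace never finishes terminating within the window, every [Serial] test that runs afterwards fails the same precondition, which is why one stuck services namespace fans out into a dozen scheduler-predicate failures.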

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42130a800>: {
        s: "error while stopping RC: service2: timed out waiting for \"service2\" to be synced",
    }
    error while stopping RC: service2: timed out waiting for "service2" to be synced
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/178/
Multiple broken tests:

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
Expected error:
    <*errors.errorString | 0xc421f9af00>: {
        s: "expected pod \"client-containers-553690f6-d614-11e6-975a-0242ac110002\" success: gave up waiting for pod 'client-containers-553690f6-d614-11e6-975a-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-553690f6-d614-11e6-975a-0242ac110002" success: gave up waiting for pod 'client-containers-553690f6-d614-11e6-975a-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29994

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:34
Expected error:
    <*errors.errorString | 0xc421c0e530>: {
        s: "expected pod \"client-containers-01d46e6e-d61e-11e6-975a-0242ac110002\" success: gave up waiting for pod 'client-containers-01d46e6e-d61e-11e6-975a-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-01d46e6e-d61e-11e6-975a-0242ac110002" success: gave up waiting for pod 'client-containers-01d46e6e-d61e-11e6-975a-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34520

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:64
Expected error:
    <*errors.errorString | 0xc4225b4820>: {
        s: "expected pod \"client-containers-c6500f62-d631-11e6-975a-0242ac110002\" success: gave up waiting for pod 'client-containers-c6500f62-d631-11e6-975a-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-c6500f62-d631-11e6-975a-0242ac110002" success: gave up waiting for pod 'client-containers-c6500f62-d631-11e6-975a-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29467

Failed: DiffResources {e2e.go}

Error: 16 leaked resources
[ instance-templates ]
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-8dd51976  n1-standard-2               2017-01-08T16:02:35.243-08:00
[ instance-groups ]
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-8dd51976-grp  us-central1-f  zone   bootstrap-e2e  Yes      0
[ instances ]
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-8dd51976-9d9c  us-central1-f  n1-standard-2               10.240.0.2   146.148.80.138  STOPPING
+gke-bootstrap-e2e-default-pool-8dd51976-bhst  us-central1-f  n1-standard-2               10.240.0.4   35.184.23.195   STOPPING
+gke-bootstrap-e2e-default-pool-8dd51976-hr8x  us-central1-f  n1-standard-2               10.240.0.3   104.154.220.24  STOPPING
[ disks ]
+gke-bootstrap-e2e-default-pool-8dd51976-9d9c                     us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-8dd51976-bhst                     us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-8dd51976-hr8x                     us-central1-f  100      pd-standard  READY
[ routes ]
+default-route-3558bf2fe471d34d                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
[ routes ]
+default-route-c2a0ae5c9b547a92                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
[ routes ]
+gke-bootstrap-e2e-b5e1d067-4c45f6ff-d5ff-11e6-96c8-42010af0002d  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-8dd51976-bhst  1000
+gke-bootstrap-e2e-b5e1d067-4d4f70a7-d5ff-11e6-96c8-42010af0002d  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-8dd51976-hr8x  1000
+gke-bootstrap-e2e-b5e1d067-9df47585-d63e-11e6-a183-42010af00009  bootstrap-e2e  10.72.4.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-8dd51976-9d9c  1000

Issues about this test specifically: #33373 #33416 #34060
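The DiffResources failure above is likely fallout from the interrupted teardown reported below (TearDown: signal: interrupt) rather than an independent bug: the listed GCE instances, disks, and routes were simply left behind. A hypothetical helper for pulling the leaked resource names out of such a diff (added lines are prefixed with '+') so they could be fed to `gcloud compute ... delete` might look like this (sketch only; `extract_leaked` is our name, not part of the test infrastructure):

```shell
# Read a DiffResources report on stdin and print each leaked gke- resource
# name once, stripped of the leading '+' diff marker.
extract_leaked() {
  grep '^+gke-' | awk '{print $1}' | sed 's/^+//' | sort -u
}
```

Usage would be something like `extract_leaked < diff.txt | xargs -r gcloud compute instances delete --zone us-central1-f`, adjusting the gcloud subcommand per resource type.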

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/179/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ea97a0>: {
        s: "Namespace e2e-tests-services-cxftn is active",
    }
    Namespace e2e-tests-services-cxftn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:808
Jan  9 04:36:27.186: Could not reach HTTP service through 104.154.220.24:30276 after 5m0s: received non-success return status "404 Not Found" trying to access http://104.154.220.24:30276/echo?msg=hello; got body: default backend - 404
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2443

Issues about this test specifically: #26134

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42131b9a0>: {
        s: "Namespace e2e-tests-services-cxftn is active",
    }
    Namespace e2e-tests-services-cxftn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42001c190>: {s: "unexpected EOF"}
    unexpected EOF
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1693

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4234941a0>: {
        s: "Namespace e2e-tests-services-cxftn is active",
    }
    Namespace e2e-tests-services-cxftn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422af86c0>: {
        s: "Namespace e2e-tests-services-cxftn is active",
    }
    Namespace e2e-tests-services-cxftn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4234e5f40>: {
        s: "Namespace e2e-tests-services-cxftn is active",
    }
    Namespace e2e-tests-services-cxftn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221c3850>: {
        s: "Namespace e2e-tests-services-cxftn is active",
    }
    Namespace e2e-tests-services-cxftn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a1d970>: {
        s: "Namespace e2e-tests-services-cxftn is active",
    }
    Namespace e2e-tests-services-cxftn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42217b550>: {
        s: "Namespace e2e-tests-services-cxftn is active",
    }
    Namespace e2e-tests-services-cxftn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227d0cb0>: {
        s: "Namespace e2e-tests-services-cxftn is active",
    }
    Namespace e2e-tests-services-cxftn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/182/
Multiple broken tests:

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc420c70050>: {
        s: "expected pod \"pod-configmaps-eb9b5965-d70e-11e6-9c90-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-eb9b5965-d70e-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-eb9b5965-d70e-11e6-9c90-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-eb9b5965-d70e-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #27245

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc4224fe150>: {
        s: "expected pod \"pod-secrets-ab51800d-d714-11e6-9c90-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-ab51800d-d714-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-ab51800d-d714-11e6-9c90-0242ac11000b" success: gave up waiting for pod 'pod-secrets-ab51800d-d714-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29221

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:189
Expected error:
    <*errors.errorString | 0xc421300430>: {
        s: "expected pod \"downwardapi-volume-0ec91255-d70c-11e6-9c90-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-0ec91255-d70c-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-0ec91255-d70c-11e6-9c90-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-0ec91255-d70c-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc4223249b0>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc42232c170>: {
        s: "expected pod \"pod-2e410607-d718-11e6-9c90-0242ac11000b\" success: gave up waiting for pod 'pod-2e410607-d718-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-2e410607-d718-11e6-9c90-0242ac11000b" success: gave up waiting for pod 'pod-2e410607-d718-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37500

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
Expected error:
    <*errors.errorString | 0xc4220cf240>: {
        s: "expected pod \"pod-secrets-84860ac2-d712-11e6-9c90-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-84860ac2-d712-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-84860ac2-d712-11e6-9c90-0242ac11000b" success: gave up waiting for pod 'pod-secrets-84860ac2-d712-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:150
Expected error:
    <*errors.errorString | 0xc42252a390>: {
        s: "expected pod \"pod-secrets-ae50f158-d713-11e6-9c90-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-ae50f158-d713-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-ae50f158-d713-11e6-9c90-0242ac11000b" success: gave up waiting for pod 'pod-secrets-ae50f158-d713-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.StatusError | 0xc4214a1c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (post services rc-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (post services rc-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:59
Expected error:
    <*errors.errorString | 0xc42178a700>: {
        s: "expected pod \"pod-configmaps-672ad320-d711-11e6-9c90-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-672ad320-d711-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-672ad320-d711-11e6-9c90-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-672ad320-d711-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32949

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
Expected error:
    <*errors.errorString | 0xc422937e20>: {
        s: "expected pod \"\" success: gave up waiting for pod 'pod-service-account-f6f97803-d739-11e6-9c90-0242ac11000b-5tgg8' to be 'success or failure' after 5m0s",
    }
    expected pod "" success: gave up waiting for pod 'pod-service-account-f6f97803-d739-11e6-9c90-0242ac11000b-5tgg8' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37526

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc4225b8b90>: {
        s: "expected pod \"pod-216512f6-d719-11e6-9c90-0242ac11000b\" success: gave up waiting for pod 'pod-216512f6-d719-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-216512f6-d719-11e6-9c90-0242ac11000b" success: gave up waiting for pod 'pod-216512f6-d719-11e6-9c90-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #33987

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/190/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4230c2680>: {
        s: "Namespace e2e-tests-services-jt7xw is active",
    }
    Namespace e2e-tests-services-jt7xw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4239baaf0>: {
        s: "Namespace e2e-tests-services-jt7xw is active",
    }
    Namespace e2e-tests-services-jt7xw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc4238252a0>: {
        s: "Unable to get server version: the server cannot complete the requested operation at this time, try again later",
    }
    Unable to get server version: the server cannot complete the requested operation at this time, try again later
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:233

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422ff97c0>: {
        s: "Namespace e2e-tests-services-jt7xw is active",
    }
    Namespace e2e-tests-services-jt7xw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4229d1900>: {
        s: "Namespace e2e-tests-services-jt7xw is active",
    }
    Namespace e2e-tests-services-jt7xw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4233161b0>: {
        s: "Namespace e2e-tests-services-jt7xw is active",
    }
    Namespace e2e-tests-services-jt7xw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ead7b0>: {
        s: "Namespace e2e-tests-services-jt7xw is active",
    }
    Namespace e2e-tests-services-jt7xw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227d56d0>: {
        s: "Namespace e2e-tests-services-jt7xw is active",
    }
    Namespace e2e-tests-services-jt7xw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc422ce7090>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 77, 220],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.77.220:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4230815f0>: {
        s: "Namespace e2e-tests-services-jt7xw is active",
    }
    Namespace e2e-tests-services-jt7xw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/225/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1253
Jan 14 15:07:16.562: Failed getting pod e2e-test-nginx-pod: Timeout while waiting for pods with labels "run=e2e-test-nginx-pod" to be running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1231

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:188
Expected error:
    <*errors.errorString | 0xc4203aacb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:101

Issues about this test specifically: #36564

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Jan 14 16:29:53.314: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38439

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 14 14:17:35.785: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Jan 14 15:17:28.805: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37774

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:261
Expected error:
    <*errors.errorString | 0xc4203aacb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:202

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc422128100>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:310
Expected error:
    <*errors.errorString | 0xc4203aacb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #35793

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc422882000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/227/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:30:57.926: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4200f4a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 15 02:40:03.136: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 05:48:05.988: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4204b6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc4203d0db0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34250

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:11:25.181: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b6b400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35277

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:21:05.182: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421503400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 05:41:41.570: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d48a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32936

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:24:29.494: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215ce000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 06:27:40.995: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ec4000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 06:14:14.016: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421900000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 04:27:47.199: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b59400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:41:19.277: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420983400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 04:21:22.442: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421180a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30851

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:47:49.398: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420648000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 04:17:53.724: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fd2000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27703 #32981 #35286

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 04:24:33.711: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420da8a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31575 #32756

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 04:31:02.356: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420283400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:27:44.777: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42145d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 06:17:31.147: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fcaa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 05:51:23.085: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216f3400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:44:38.367: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d1e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 06:07:49.382: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421830a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:34:50.889: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420754a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 06:20:49.425: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216f3400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421cc8380>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:17:47.879: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fe5400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421c042c0>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 15 03:38:02.112: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4205c4000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/233/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 06:54:31.198: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a9d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 17 06:20:14.286: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 17 06:51:22.752: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222f7400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32467 #36276

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-8bb8-pvc-f1c4bc09-dcbd-11e6-8079-42010af00035  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060
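
The DiffResources check works by listing the project's cloud resources before and after the run and reporting anything that survives teardown, like the `pvc-` disk above. A minimal sketch of that set difference (`diffResources` is a hypothetical helper for illustration; the real logic lives in the test harness, not here):

```go
package main

import (
	"fmt"
	"sort"
)

// diffResources returns names present after the run but not before,
// i.e. what a leak report like the one above would list.
func diffResources(before, after []string) []string {
	seen := map[string]bool{}
	for _, name := range before {
		seen[name] = true
	}
	var leaked []string
	for _, name := range after {
		if !seen[name] {
			leaked = append(leaked, name)
		}
	}
	sort.Strings(leaked)
	return leaked
}

func main() {
	before := []string{"gke-bootstrap-e2e-default-pool"}
	after := append(before, "gke-bootstrap-e2e-8bb8-pvc-f1c4bc09")
	fmt.Println(diffResources(before, after))
}
```

A leaked `pvc-` disk typically means a PersistentVolume outlived its cluster, so the leak usually points back at a disruptive or StatefulSet test that failed before cleanup.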

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/241/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.errorString | 0xc421f1c210>: {
        s: "Unable to get server version: Get https://35.184.80.248/version: dial tcp 35.184.80.248:443: i/o timeout",
    }
    Unable to get server version: Get https://35.184.80.248/version: dial tcp 35.184.80.248:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:218

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421911970>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-905hf is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-905hf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203a9220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28071

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203a9220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227cb740>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-905hf is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-905hf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: DiffResources {e2e.go}

Error: 19 leaked resources
[ instance-templates ]
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-f92015ae  n1-standard-2               2017-01-19T13:57:59.107-08:00
[ instance-groups ]
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-f92015ae-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
[ instances ]
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-f92015ae-b284  us-central1-f  n1-standard-2               10.240.0.4   35.184.23.195   RUNNING
+gke-bootstrap-e2e-default-pool-f92015ae-dx35  us-central1-f  n1-standard-2               10.240.0.3   104.154.220.24  RUNNING
+gke-bootstrap-e2e-default-pool-f92015ae-glr1  us-central1-f  n1-standard-2               10.240.0.5   104.197.86.61   RUNNING
[ disks ]
+gke-bootstrap-e2e-default-pool-f92015ae-b284                     us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-f92015ae-dx35                     us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-f92015ae-glr1                     us-central1-f  100      pd-standard  READY
[ routes ]
+default-route-cc784a93bddcbdaf                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+default-route-dcc8f6bb3286ffd9                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
[ routes ]
+gke-bootstrap-e2e-38375c4d-9985a1aa-dec1-11e6-8e85-42010af0003b  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-f92015ae-glr1  1000
+gke-bootstrap-e2e-38375c4d-bc2cde83-de92-11e6-be57-42010af00040  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-f92015ae-b284  1000
+gke-bootstrap-e2e-38375c4d-bd034cc7-de92-11e6-be57-42010af00040  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-f92015ae-dx35  1000
[ firewall-rules ]
+gke-bootstrap-e2e-38375c4d-all           bootstrap-e2e  10.72.0.0/14      sctp,tcp,udp,icmp,esp,ah
+gke-bootstrap-e2e-38375c4d-ssh           bootstrap-e2e  35.184.80.248/32  tcp:22                                  gke-bootstrap-e2e-38375c4d-node
+gke-bootstrap-e2e-38375c4d-vms           bootstrap-e2e  10.240.0.0/16     tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-38375c4d-node

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203a9220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #35790

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a54e30>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-905hf is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-905hf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Expected error:
    <*url.Error | 0xc4224c7c50>: {
        Op: "Get",
        URL: "https://35.184.80.248/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-905hf/replicationcontrollers/rc",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 80, 248],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://35.184.80.248/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-905hf/replicationcontrollers/rc: dial tcp 35.184.80.248:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:250

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42220da40>: {
        s: "Namespace e2e-tests-network-partition-rgvsp is active",
    }
    Namespace e2e-tests-network-partition-rgvsp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] Mesos schedules pods annotated with roles on correct slaves {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203a9220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203a9220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #32945

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Pod was not deleted during network partition.
Expected
    <*url.Error | 0xc42262c570>: {
        Op: "Get",
        URL: "https://35.184.80.248/api/v1/namespaces/e2e-tests-network-partition-rgvsp/pods?labelSelector=name%3Dmy-hostname-net",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 80, 248],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
to equal
    <*errors.errorString | 0xc4203a9220>: {
        s: "timed out waiting for the condition",
    }
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:347

Issues about this test specifically: #37479

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203a9220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d1d9b0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-905hf is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-905hf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/260/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421671750>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42266aa80>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42166b940>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422397220>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f98d00>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42209fb60>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42227af90>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*url.Error | 0xc4217993e0>: {
        Op: "Get",
        URL: "https://35.184.84.60/api/v1/namespaces/e2e-tests-services-72zfd/services/service2",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 84, 60],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.84.60/api/v1/namespaces/e2e-tests-services-72zfd/services/service2: dial tcp 35.184.84.60:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:444

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420a3df60>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42209eb80>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42295db40>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219587f0>: {
        s: "Namespace e2e-tests-services-72zfd is active",
    }
    Namespace e2e-tests-services-72zfd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/262/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422cf87f0>: {
        s: "Namespace e2e-tests-services-3rpbj is active",
    }
    Namespace e2e-tests-services-3rpbj is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b6e5c0>: {
        s: "Namespace e2e-tests-services-3rpbj is active",
    }
    Namespace e2e-tests-services-3rpbj is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42296a1f0>: {
        s: "Namespace e2e-tests-services-3rpbj is active",
    }
    Namespace e2e-tests-services-3rpbj is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4230b5760>: {
        s: "Namespace e2e-tests-services-3rpbj is active",
    }
    Namespace e2e-tests-services-3rpbj is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212dd710>: {
        s: "Namespace e2e-tests-services-3rpbj is active",
    }
    Namespace e2e-tests-services-3rpbj is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 21 06:12:32.239: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422cf8df0>: {
        s: "Namespace e2e-tests-services-3rpbj is active",
    }
    Namespace e2e-tests-services-3rpbj is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4221200b0>: {
        s: "error while stopping RC: service2: Get https://104.197.203.254/api/v1/namespaces/e2e-tests-services-3rpbj/replicationcontrollers/service2: dial tcp 104.197.203.254:443: getsockopt: connection refused",
    }
    error while stopping RC: service2: Get https://104.197.203.254/api/v1/namespaces/e2e-tests-services-3rpbj/replicationcontrollers/service2: dial tcp 104.197.203.254:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422641000>: {
        s: "Namespace e2e-tests-services-3rpbj is active",
    }
    Namespace e2e-tests-services-3rpbj is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222a2900>: {
        s: "Namespace e2e-tests-services-3rpbj is active",
    }
    Namespace e2e-tests-services-3rpbj is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4205cd650>: {
        s: "Namespace e2e-tests-services-3rpbj is active",
    }
    Namespace e2e-tests-services-3rpbj is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4229dea60>: {
        s: "Namespace e2e-tests-services-3rpbj is active",
    }
    Namespace e2e-tests-services-3rpbj is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/264/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 21 22:43:25.573: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225ca278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 21 22:50:15.186: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42185d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:148
Expected error:
    <*errors.errorString | 0xc42034ac90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:122

Issues about this test specifically: #31428

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 21 23:17:29.330: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42283e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38439

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 21 22:57:23.028: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422402c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 21 23:20:42.929: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a0ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 21 18:51:00.683: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 21 22:53:58.902: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4231a4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/266/
Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-aa13-pvc-aee2a87e-e0f9-11e6-a127-42010af00026  us-central1-f  2        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc422d58f10>: {
        s: "expected pod \"pod-e5d85207-e0e1-11e6-81a5-0242ac110002\" success: gave up waiting for pod 'pod-e5d85207-e0e1-11e6-81a5-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-e5d85207-e0e1-11e6-81a5-0242ac110002" success: gave up waiting for pod 'pod-e5d85207-e0e1-11e6-81a5-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37071

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
Expected error:
    <*errors.errorString | 0xc422258ac0>: {
        s: "expected pod \"pod-4cd83dcf-e0ed-11e6-81a5-0242ac110002\" success: gave up waiting for pod 'pod-4cd83dcf-e0ed-11e6-81a5-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-4cd83dcf-e0ed-11e6-81a5-0242ac110002" success: gave up waiting for pod 'pod-4cd83dcf-e0ed-11e6-81a5-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34658

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 22 14:33:36.905: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
Expected error:
    <*errors.errorString | 0xc42272d740>: {
        s: "expected pod \"pod-6e3a37aa-e0d3-11e6-81a5-0242ac110002\" success: gave up waiting for pod 'pod-6e3a37aa-e0d3-11e6-81a5-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-6e3a37aa-e0d3-11e6-81a5-0242ac110002" success: gave up waiting for pod 'pod-6e3a37aa-e0d3-11e6-81a5-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc422bcc060>: {
        s: "expected pod \"pod-cdb0da0e-e0e0-11e6-81a5-0242ac110002\" success: gave up waiting for pod 'pod-cdb0da0e-e0e0-11e6-81a5-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-cdb0da0e-e0e0-11e6-81a5-0242ac110002" success: gave up waiting for pod 'pod-cdb0da0e-e0e0-11e6-81a5-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #30851

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/268/
Multiple broken tests:

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 03:03:10.674: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421010ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428 #40256

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 02:56:46.371: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f944f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 01:23:23.174: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210018f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 04:32:22.592: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216824f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35277

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 05:29:31.307: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213064f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc42136aab0>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:308

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
Expected
    <*errors.errorString | 0xc4203acd10>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:147

Issues about this test specifically: #31873

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 05:49:29.215: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421174ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:246
Jan 23 01:42:14.978: Pods on node gke-bootstrap-e2e-default-pool-ee2837ac-b6hv are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:182

Issues about this test specifically: #36794

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-6341-pvc-9f39eaa7-e144-11e6-b886-42010af00003  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 23 00:35:53.349: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 07:09:46.194: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dd2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 05:39:18.070: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217024f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 05:32:46.481: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421010ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] DisruptionController should create a PodDisruptionBudget {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 02:50:07.479: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b7b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37017

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 02:14:45.756: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206a98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:141
Expected error:
    <*errors.errorString | 0xc4203acd10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:128

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 01:57:03.106: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210018f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 05:26:05.733: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42047eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 02:59:59.630: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201b38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214bb720>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203acd10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 05:42:44.740: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211664f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b49af0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-zqs0b gke-bootstrap-e2e-default-pool-ee2837ac-l6gh Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:24:59 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:25:18 -0800 PST ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:24:59 -0800 PST  }]\nkube-dns-4101612645-7pr7l        gke-bootstrap-e2e-default-pool-ee2837ac-b6hv Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:24:27 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:25:07 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:24:26 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-zqs0b gke-bootstrap-e2e-default-pool-ee2837ac-l6gh Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:24:59 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:25:18 -0800 PST ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:24:59 -0800 PST  }]
    kube-dns-4101612645-7pr7l        gke-bootstrap-e2e-default-pool-ee2837ac-b6hv Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:24:27 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:25:07 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-22 23:24:26 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4203acd10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1730

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 02:46:41.068: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c7d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Expected error:
    <*errors.errorString | 0xc4203acd10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 02:53:18.813: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42158cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:507
Expected error:
    <*errors.errorString | 0xc4203acd10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #38308

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203acd10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 02:36:44.828: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42154b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 04:28:54.914: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217104f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34520

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Jan 22 23:54:21.423: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 05:36:04.533: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201b38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 04:35:35.784: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420288ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212e21d0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:141
Expected error:
    <*errors.errorString | 0xc4203acd10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:103

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 03:06:21.677: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d2a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 05:19:38.938: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218e8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 05:22:52.296: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421682ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 02:39:56.508: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202898f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 23 05:46:13.046: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42047eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/274/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 04:54:44.570: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e4e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:05:48.824: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4200f4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 03:19:25.036: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ba5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:36:17.004: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421071678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:32:24.789: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422118c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:12:16.393: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422b77678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 03:38:16.219: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210bec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34827

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 04:02:30.517: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210be278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:48:50.446: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42247d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:58:47.584: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42225d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33008

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:55:34.785: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210bec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38511

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:17:37.797: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4229f4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:02:19.578: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422b22278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:39:48.120: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421345678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 04:51:31.116: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ce2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35790

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:43:24.082: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42250f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213d4600>: {
        s: "4 / 13 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-0c9846d6-88ss gke-bootstrap-e2e-default-pool-0c9846d6-88ss Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:14:40 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:31:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:31:11 -0800 PST  }]\nkube-dns-4101612645-rkpf9                                          gke-bootstrap-e2e-default-pool-0c9846d6-88ss Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:48:29 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:32:01 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:48:29 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-0c9846d6-88ss            gke-bootstrap-e2e-default-pool-0c9846d6-88ss Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:14:40 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:31:14 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:31:11 -0800 PST  }]\nl7-default-backend-2234341178-bpvv6                                gke-bootstrap-e2e-default-pool-0c9846d6-88ss Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:15:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:31:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:15:51 -0800 PST  }]\n",
    }
    4 / 13 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-0c9846d6-88ss gke-bootstrap-e2e-default-pool-0c9846d6-88ss Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:14:40 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:31:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:31:11 -0800 PST  }]
    kube-dns-4101612645-rkpf9                                          gke-bootstrap-e2e-default-pool-0c9846d6-88ss Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:48:29 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:32:01 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:48:29 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-0c9846d6-88ss            gke-bootstrap-e2e-default-pool-0c9846d6-88ss Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:14:40 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:31:14 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:31:11 -0800 PST  }]
    l7-default-backend-2234341178-bpvv6                                gke-bootstrap-e2e-default-pool-0c9846d6-88ss Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:15:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-25 01:31:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 23:15:51 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:25:02.561: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224b8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:35:54.491: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421070278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:01:16.286: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421726278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:28:47.538: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cc7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:25:34.210: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210be278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 07:06:14.947: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ad1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 04:19:39.063: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4229c6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29521

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 04:26:27.839: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4229dd678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 03:56:06.216: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422479678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 04:23:09.370: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421727678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:42:20.311: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421902c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28339 #36379

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 07:03:01.765: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220d6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 03:25:49.703: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220f0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:46:36.040: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fccc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:33:03.864: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422323678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36794

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc420388780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc420388780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33285

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc420388780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32830

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:20:49.003: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4229dc278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32936

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 06:22:09.020: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422467678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26134

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 04:45:09.362: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ad1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:28:42.910: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220f1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 05:45:37.447: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420aff678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28346

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 25 03:22:36.323: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4200f4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc420388780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34250

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/279/
Multiple broken tests:

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 22:20:12.124: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f12ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 21:04:25.541: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421abcef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 22:53:02.540: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217c64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 22:13:12.455: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42105e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 20:10:44.526: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421520ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 20:07:31.213: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210aeef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 17:12:55.773: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42125c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 19:39:51.594: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202318f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 20:13:57.777: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420314ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28503

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 23:03:52.349: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420df58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 22:56:19.447: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c7c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc421c983f0>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:308

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 21:14:48.518: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221204f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214efd40>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:33:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8            gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:08 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:33:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8            gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:08 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 19:52:52.822: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42122f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 20:17:23.114: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ab58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:58:09.021: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214b98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36183

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 17:09:40.526: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b86ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 22:09:44.916: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202138f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 22:06:30.892: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211a04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 17:22:02.557: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202318f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 19:56:16.722: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421272ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 21:22:48.341: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215b38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36970

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:19:12.388: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213b58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 22:29:52.155: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225184f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 21:51:44.564: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b30ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:45:02.346: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210aeef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35790

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 19:01:22.226: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209904f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 20:02:42.987: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217caef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38308

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 19:11:41.191: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ec6ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 19:08:23.938: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211664f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213053c0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:33:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8            gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:08 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:33:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8            gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:08 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 19:04:54.633: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bb6ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:25:59.590: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bb8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35579

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:29:16.704: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a684f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc42154a950>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:32:41.653: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420314ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:09:25.445: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420df04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:16:01.275: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214d78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 21:07:40.946: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b318f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 20:28:04.336: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421872ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26134

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42104ad40>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:33:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8            gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:08 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:33:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-93a30d0d-pnt8            gke-bootstrap-e2e-default-pool-93a30d0d-pnt8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:08 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-26 15:32:07 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 19:46:16.160: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216a78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 21:55:28.813: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215db8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 21:18:02.095: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214558f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 22:23:27.543: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214b84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31400

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-93a30d0d-pnt8\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-93a30d0d-pnt8" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 17:16:03.867: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202318f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:54:40.265: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b96ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 23:00:08.783: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213844f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 19:49:39.767: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b878f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:12:42.628: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42152a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203aac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 21:39:14.131: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c724f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 21:11:33.303: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209904f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc4203aac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:06:12.203: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210af8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 19:59:30.502: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213578f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 23:25:06.266: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42243c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37259

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 22:16:28.913: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421272ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 18:22:44.348: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221018f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

@fejta fejta closed this as completed Jan 30, 2017