
ci-kubernetes-e2e-gci-gke-subnet: broken test run #38582

Closed
k8s-github-robot opened this issue Dec 11, 2016 · 7 comments

Labels: kind/flake Categorizes issue or PR as related to a flaky test.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-subnet/89/

Multiple broken tests:

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 18 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-ebf9669d  n1-standard-2               2016-12-10T23:04:42.984-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-ebf9669d-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-ebf9669d-kqtk  us-central1-f  n1-standard-2               10.240.0.3   130.211.156.213  RUNNING
+gke-bootstrap-e2e-default-pool-ebf9669d-qjfv  us-central1-f  n1-standard-2               10.240.0.2   104.154.180.232  RUNNING
+gke-bootstrap-e2e-default-pool-ebf9669d-zzp5  us-central1-f  n1-standard-2               10.240.0.5   162.222.179.157  RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-ebf9669d-kqtk  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-ebf9669d-qjfv  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-ebf9669d-zzp5  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-5fcf7d96-0b4df0c8-bf8d-11e6-bc9a-42010af0002f  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-ebf9669d-zzp5  1000
+gke-bootstrap-e2e-5fcf7d96-72f108b8-bf70-11e6-834a-42010af0002f  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-ebf9669d-kqtk  1000
+gke-bootstrap-e2e-5fcf7d96-7328fcc7-bf70-11e6-834a-42010af0002f  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-ebf9669d-qjfv  1000
+gke-bootstrap-e2e-5fcf7d96-all  bootstrap-e2e  10.72.0.0/14      icmp,esp,ah,sctp,tcp,udp
+gke-bootstrap-e2e-5fcf7d96-ssh  bootstrap-e2e  35.184.18.187/32  tcp:22                                  gke-bootstrap-e2e-5fcf7d96-node
+gke-bootstrap-e2e-5fcf7d96-vms  bootstrap-e2e  10.240.0.0/16     tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-5fcf7d96-node

Issues about this test specifically: #33373 #33416 #34060
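
For triage, the leaked objects can be re-listed by the cluster prefix they share. A minimal audit sketch, assuming `gcloud` is installed and authenticated against the test project; `bootstrap-e2e` is the prefix visible in the diff above, and the resource types correspond to its sections:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The diff above shows instances, an instance group, disks, routes,
	// and firewall rules, all named after the cluster prefix.
	for _, res := range []string{"instances", "instance-groups", "disks", "routes", "firewall-rules"} {
		out, err := exec.Command("gcloud", "compute", res, "list",
			"--filter=name~^gke-bootstrap-e2e").CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n%s", res, err, out)
			continue
		}
		fmt.Printf("== %s ==\n%s", res, out)
	}
}
```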

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 8h0m0s timeout during teardown

Issues about this test specifically: #35658

Previous issues for this suite: #37341 #38356

@k8s-github-robot added the kind/flake (Categorizes issue or PR as related to a flaky test.) and priority/P2 labels on Dec 11, 2016
@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-subnet/102/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42038cd20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450
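
The recurring "timed out waiting for the condition" string in this and most of the failures below is not DNS-specific: it is the error the e2e framework's polling helper returns whenever a condition function never succeeds before its deadline. A minimal sketch, assuming `k8s.io/apimachinery/pkg/util/wait` is available:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every second for up to five seconds. The condition below never
	// becomes true, so Poll returns wait.ErrWaitTimeout, whose message is
	// exactly "timed out waiting for the condition".
	err := wait.Poll(time.Second, 5*time.Second, func() (bool, error) {
		return false, nil // stand-in for "is the DNS record resolvable yet?"
	})
	fmt.Println(err)
}
```

Identical error strings across tests can therefore hide unrelated root causes; the file and line printed under each dump is the better signal.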

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420459340>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:16, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63617379782, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63617379782, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:16, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63617379782, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63617379782, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
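
The dump itself explains the wait: with 23 replicas observed but only 16 updated and 5 unavailable, the rollout was still in flight when the suite gave up. A toy completeness check over the same status fields (a simplified stand-in, not the actual e2e helper):

```go
package main

import "fmt"

// deploymentStatus mirrors the fields printed in the dump above.
type deploymentStatus struct {
	ObservedGeneration                                                int64
	Replicas, UpdatedReplicas, AvailableReplicas, UnavailableReplicas int32
}

// rolloutComplete reports whether every replica is updated and available,
// roughly the condition the e2e wait keeps polling for.
func rolloutComplete(s deploymentStatus, desired int32) bool {
	return s.UpdatedReplicas == desired &&
		s.AvailableReplicas == desired &&
		s.UnavailableReplicas == 0
}

func main() {
	s := deploymentStatus{3, 23, 16, 18, 5} // values from the failure above
	fmt.Println(rolloutComplete(s, 23))     // false: still mid-rollout
}
```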

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.StatusError | 0xc42276f100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-11l0w/services/test-deployment-ctrl/proxy/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10\\\"\") has prevented the request from succeeding (post services test-deployment-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "test-deployment-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-11l0w/services/test-deployment-ctrl/proxy/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-11l0w/services/test-deployment-ctrl/proxy/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10\"") has prevented the request from succeeding (post services test-deployment-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Issues about this test specifically: #27406 #27669 #29770 #32642
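
The 500 surfaced on a POST the test sends through the apiserver's service proxy to drive synthetic load on its consumer service. A hedged reconstruction of that request; the apiserver URL is a placeholder (a real call also needs cluster credentials), while the path and query parameters are copied from the error above:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	apiserver := "https://APISERVER" // placeholder
	q := url.Values{
		"metric":             {"QPS"},
		"delta":              {"0"},
		"durationSec":        {"30"},
		"requestSizeMetrics": {"10"},
	}
	// The proxy subresource forwards this to the test-deployment-ctrl
	// service inside the cluster.
	u := apiserver + "/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-11l0w" +
		"/services/test-deployment-ctrl/proxy/BumpMetric?" + q.Encode()
	resp, err := http.Post(u, "", nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // the run above got a 500 back
}
```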

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc42038cd20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-subnet/106/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:148
starting pod liveness-exec in namespace e2e-tests-container-probe-f8zs4
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:364

Issues about this test specifically: #37914
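
For context, what this probe test exercises: the kubelet periodically execs `cat /tmp/health` in the container and restarts it only after enough consecutive non-zero exits. A toy model of that loop in plain Go (illustrative thresholds, not kubelet code):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const failureThreshold = 3 // consecutive failures before a restart
	failures := 0
	for i := 0; i < 10; i++ {
		// The probe from the test name: exec `cat /tmp/health`.
		// A zero exit status means healthy.
		if err := exec.Command("cat", "/tmp/health").Run(); err != nil {
			failures++
		} else {
			failures = 0
		}
		if failures >= failureThreshold {
			fmt.Println("liveness failed; kubelet would restart the container")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("container stayed healthy; no restart")
}
```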

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #37056

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:131
Expected error:
    <*errors.errorString | 0xc42121b290>: {
        s: "expected pod \"var-expansion-a4ee3d60-c36c-11e6-9032-0242ac110005\" success: gave up waiting for pod 'var-expansion-a4ee3d60-c36c-11e6-9032-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "var-expansion-a4ee3d60-c36c-11e6-9032-0242ac110005" success: gave up waiting for pod 'var-expansion-a4ee3d60-c36c-11e6-9032-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #28503
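
The feature under test: `$(VAR_NAME)` references in a container's args are expanded against the container's environment, and unresolvable references are left unchanged. A simplified expander showing that substitution rule (it omits the `$$` escape the real implementation also handles):

```go
package main

import (
	"fmt"
	"regexp"
)

// expand substitutes $(NAME) references using env; unknown references
// stay as-is, matching the documented expansion behavior.
func expand(s string, env map[string]string) string {
	re := regexp.MustCompile(`\$\(([A-Za-z_][A-Za-z0-9_]*)\)`)
	return re.ReplaceAllStringFunc(s, func(m string) string {
		if v, ok := env[m[2:len(m)-1]]; ok {
			return v
		}
		return m
	})
}

func main() {
	env := map[string]string{"POD_NAME": "var-expansion-test"}
	fmt.Println(expand("echo $(POD_NAME) $(MISSING)", env))
	// echo var-expansion-test $(MISSING)
}
```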

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4206850b0>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]\nheapster-v1.2.0-2168613315-dg9fv                                   gke-bootstrap-e2e-default-pool-afde8397-534a Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:55 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  }]\nkube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]
    heapster-v1.2.0-2168613315-dg9fv                                   gke-bootstrap-e2e-default-pool-afde8397-534a Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:55 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  }]
    kube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914
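
The "RUNNING and READY" gate that this and the following SchedulerPredicates failures keep tripping is a per-pod check on the phase plus the Ready condition, the same fields the dumps print. A minimal version of that check, assuming the stable `k8s.io/api/core/v1` types:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// runningAndReady approximates the condition the e2e framework requires
// of every kube-system pod before starting a scheduling test.
func runningAndReady(pod corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false // e.g. the Pending pods listed above
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	var pod corev1.Pod // zero value: no phase, no conditions
	fmt.Println(runningAndReady(pod)) // false
}
```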

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216d8880>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]\nheapster-v1.2.0-2168613315-dg9fv                                   gke-bootstrap-e2e-default-pool-afde8397-534a Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:55 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  }]\nkube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]
    heapster-v1.2.0-2168613315-dg9fv                                   gke-bootstrap-e2e-default-pool-afde8397-534a Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:55 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  }]
    kube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Dec 16 03:14:04.331: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:141
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:103

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:62
Expected error:
    <*errors.errorString | 0xc4211af200>: {
        s: "expected pod \"downward-api-2d5bd11f-c397-11e6-9032-0242ac110005\" success: gave up waiting for pod 'downward-api-2d5bd11f-c397-11e6-9032-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "downward-api-2d5bd11f-c397-11e6-9032-0242ac110005" success: gave up waiting for pod 'downward-api-2d5bd11f-c397-11e6-9032-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167
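
What the Downward API test wires up: environment variables whose values are resolved from pod metadata at runtime rather than from literals. A sketch of that spec fragment using `k8s.io/api/core/v1` types (the variable names here are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Pod name and namespace injected via downward-API fieldRef selectors.
	envs := []corev1.EnvVar{
		{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
		{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
	}
	for _, e := range envs {
		fmt.Printf("%s <- %s\n", e.Name, e.ValueFrom.FieldRef.FieldPath)
	}
}
```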

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:309
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:283

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421189b00>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]\nheapster-v1.2.0-2168613315-dg9fv                                   gke-bootstrap-e2e-default-pool-afde8397-534a Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:55 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  }]\nkube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]
    heapster-v1.2.0-2168613315-dg9fv                                   gke-bootstrap-e2e-default-pool-afde8397-534a Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:55 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  }]
    kube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42128b940>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]\nkube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]
    kube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:161
Failed waiting for pod wrapped-volume-race-a4bef523-c370-11e6-9032-0242ac110005-1l6jw to enter running state
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:383

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:197
Expected error:
    <*errors.errorString | 0xc421f28ed0>: {
        s: "expected pod \"pod-configmaps-6e226dd1-c39c-11e6-9032-0242ac110005\" success: gave up waiting for pod 'pod-configmaps-6e226dd1-c39c-11e6-9032-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-6e226dd1-c39c-11e6-9032-0242ac110005" success: gave up waiting for pod 'pod-configmaps-6e226dd1-c39c-11e6-9032-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #27079
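
This conformance test injects a ConfigMap key as an environment variable and asserts the container sees the value. A sketch of the relevant spec fragment, with illustrative ConfigMap and key names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// An env var sourced from a ConfigMap key.
	e := corev1.EnvVar{
		Name: "CONFIG_DATA",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
				Key:                  "data-1",
			},
		},
	}
	fmt.Printf("%s <- configmap %q, key %q\n",
		e.Name, e.ValueFrom.ConfigMapKeyRef.Name, e.ValueFrom.ConfigMapKeyRef.Key)
}
```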

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:271
Expected
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:260

Issues about this test specifically: #31408

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:136
Expected error:
    <*errors.errorString | 0xc421775420>: {
        s: "gave up waiting for pod 'pvc-volume-tester-tsrdh' to be 'success or failure' after 15m0s",
    }
    gave up waiting for pod 'pvc-volume-tester-tsrdh' to be 'success or failure' after 15m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:232

Issues about this test specifically: #32185 #32372 #36494
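
The DynamicProvisioner test creates a claim, waits for a volume to be provisioned, then runs the `pvc-volume-tester-*` pod seen above to write and read it; here it was that pod that never completed. A sketch of such a claim, assuming pre-v0.29 `k8s.io/api` field names (where PVC resources are a `corev1.ResourceRequirements`); the 2016-era test selected the provisioner via a beta storage-class annotation, omitted here:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	claim := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{ // VolumeResourceRequirements in v0.29+
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
	q := claim.Spec.Resources.Requests[corev1.ResourceStorage]
	fmt.Println("requested storage:", q.String())
}
```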

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:185
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:172

Issues about this test specifically: #28003

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:100
Expected error:
    <*errors.errorString | 0xc421b7f8d0>: {
        s: "expected pod \"var-expansion-f7681131-c39d-11e6-9032-0242ac110005\" success: gave up waiting for pod 'var-expansion-f7681131-c39d-11e6-9032-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "var-expansion-f7681131-c39d-11e6-9032-0242ac110005" success: gave up waiting for pod 'var-expansion-f7681131-c39d-11e6-9032-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214c0ae0>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]\nheapster-v1.2.0-2168613315-dg9fv                                   gke-bootstrap-e2e-default-pool-afde8397-534a Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:55 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  }]\nkube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]
    heapster-v1.2.0-2168613315-dg9fv                                   gke-bootstrap-e2e-default-pool-afde8397-534a Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:55 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  }]
    kube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:82
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:81

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32375

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:121
starting pod liveness-exec in namespace e2e-tests-container-probe-ljqht
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:364

Issues about this test specifically: #30264

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:564
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30263

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc421d9a8b0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]\nkube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]
    kube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:100
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:99

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584
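
An ExternalName service has no endpoints; cluster DNS is expected to answer queries for it with a CNAME pointing at the configured external name, which is what this test resolves from inside a pod. A minimal check using placeholder service and namespace names (it can only succeed where cluster DNS is reachable):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// For a Service of type ExternalName with externalName: example.com,
	// this lookup should return a CNAME chain ending at example.com.
	cname, err := net.LookupCNAME("my-externalname-svc.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed (expected outside a cluster):", err)
		return
	}
	fmt.Println("CNAME:", cname)
}
```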

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:83
Expected error:
    <*errors.errorString | 0xc4216bb890>: {
        s: "expected pod \"downward-api-5f266e21-c372-11e6-9032-0242ac110005\" success: gave up waiting for pod 'downward-api-5f266e21-c372-11e6-9032-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected pod "downward-api-5f266e21-c372-11e6-9032-0242ac110005" success: gave up waiting for pod 'downward-api-5f266e21-c372-11e6-9032-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211f9760>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]\nheapster-v1.2.0-2168613315-dg9fv                                   gke-bootstrap-e2e-default-pool-afde8397-534a Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:55 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  }]\nkube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-afde8397-8m61 gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:07:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:29 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:11:22 -0800 PST  }]
    heapster-v1.2.0-2168613315-dg9fv                                   gke-bootstrap-e2e-default-pool-afde8397-534a Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:55 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:36 -0800 PST  }]
    kube-dns-4101612645-z48kq                                          gke-bootstrap-e2e-default-pool-afde8397-8m61 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:37 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 00:09:17 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #36178

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1643

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1643

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <*errors.errorString | 0xc4204514f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1662

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:493
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.142.82 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-j6rfl run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never success -- /bin/sh -c exit 0] []  <nil> Waiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\n[...the same "Waiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false" line repeats for the rest of the captured output; the log is truncated here]
to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: 
false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success 
to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: 
false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-j6rfl/success to be running, status is Pending, pod ready: false\
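The loop producing those repeated lines is a readiness poll that never sees the pod leave Pending. A minimal sketch of that kind of poll, assuming a modern client-go Clientset; the helper name, intervals, and timeout here are illustrative, not the e2e framework's actual code:

    // Illustrative sketch only: poll a pod the way the log above suggests,
    // printing one "Waiting for pod ..." line per attempt.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func waitForPodRunning(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		// The container runs `exit 0`, so Running or Succeeded both count.
    		if pod.Status.Phase == v1.PodRunning || pod.Status.Phase == v1.PodSucceeded {
    			return nil
    		}
    		ready := false
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
    				ready = true
    			}
    		}
    		// This is the exact line the failure above repeats hundreds of times.
    		fmt.Printf("Waiting for pod %s/%s to be running, status is %s, pod ready: %t\n",
    			ns, name, pod.Status.Phase, ready)
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for pod %s/%s to be running", ns, name)
    }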

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-subnet/107/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421edae10>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071
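Every SchedulerPredicates failure in this run reports the same precondition error: a namespace apparently left behind by a Services test (e2e-tests-services-gjhzb) is still Active when the [Serial] test starts, so the suite refuses to run. A minimal sketch of that kind of guard, assuming a client-go Clientset; the function name and namespace prefix are assumptions, and the calls use the modern client-go API:

    // Illustrative sketch only: the kind of precondition check that yields
    // "Namespace e2e-tests-services-gjhzb is active".
    package main

    import (
    	"context"
    	"fmt"
    	"strings"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func checkNoActiveE2ENamespaces(ctx context.Context, c kubernetes.Interface, own string) error {
    	nsList, err := c.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, ns := range nsList.Items {
    		if ns.Name == own || !strings.HasPrefix(ns.Name, "e2e-tests-") {
    			continue
    		}
    		// An Active leftover namespace means an earlier test leaked
    		// resources that could skew this [Serial] scheduling test.
    		if ns.Status.Phase == v1.NamespaceActive {
    			return fmt.Errorf("Namespace %s is active", ns.Name)
    		}
    	}
    	return nil
    }

With a guard like this, one stuck namespace fails every subsequent [Serial] test the same way, which is why the identical error repeats below.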

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422bc3330>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222f6dd0>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4230e1650>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421aae910>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e92e20>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d2e3d0>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422abbb60>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc4217d4690>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 146, 148, 64, 180],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 146.148.64.180:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
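The connection-refused dial here is the test reaching the apiserver's public IP on 443 while the apiserver is still coming back from its restart. A stdlib-only sketch of a more tolerant probe; the function name and intervals are assumptions:

    // Illustrative sketch: keep dialing host:443 until the apiserver accepts
    // connections again or the deadline passes. "connection refused" is the
    // expected transient error during the restart window.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func waitForAPIServer(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("apiserver %s still unreachable: %v", addr, err)
    		}
    		time.Sleep(2 * time.Second)
    	}
    }

Called as waitForAPIServer("146.148.64.180:443", 2*time.Minute) before re-running the service checks, this would absorb the restart window instead of failing on the first refused dial.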

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422cf7cb0>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423382150>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422afbcf0>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420da33f0>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422280390>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d9a390>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2f00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27502 #28722 #32037 #38168
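Unlike the namespace failures, this one is the generic poll-timeout path: "timed out waiting for the condition" is the stock error from the Kubernetes wait helpers, surfaced by any poll whose condition never becomes true. A self-contained sketch that reproduces the exact message (using the modern import path):

    // Illustrative sketch: PollImmediate returns the library's stock timeout
    // error when the condition function never reports done.
    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
    		return false, nil // condition never satisfied
    	})
    	fmt.Println(err) // prints: timed out waiting for the condition
    }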

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222f7680>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421af5e90>: {
        s: "Namespace e2e-tests-services-gjhzb is active",
    }
    Namespace e2e-tests-services-gjhzb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-subnet/112/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422f9b590>: {
        s: "Namespace e2e-tests-services-mz6bq is active",
    }
    Namespace e2e-tests-services-mz6bq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220844e0>: {
        s: "Namespace e2e-tests-services-mz6bq is active",
    }
    Namespace e2e-tests-services-mz6bq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc4222339a0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 174, 13],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.198.174.13:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422a686b0>: {
        s: "Namespace e2e-tests-services-mz6bq is active",
    }
    Namespace e2e-tests-services-mz6bq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a4cdf0>: {
        s: "Namespace e2e-tests-services-mz6bq is active",
    }
    Namespace e2e-tests-services-mz6bq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4234a66f0>: {
        s: "Namespace e2e-tests-services-mz6bq is active",
    }
    Namespace e2e-tests-services-mz6bq is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-subnet/117/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220713f0>: {
        s: "Namespace e2e-tests-services-l0874 is active",
    }
    Namespace e2e-tests-services-l0874 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ced870>: {
        s: "Namespace e2e-tests-services-l0874 is active",
    }
    Namespace e2e-tests-services-l0874 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e182e0>: {
        s: "Namespace e2e-tests-services-l0874 is active",
    }
    Namespace e2e-tests-services-l0874 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42320a080>: {
        s: "Namespace e2e-tests-services-l0874 is active",
    }
    Namespace e2e-tests-services-l0874 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.StatusError | 0xc42328c280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"execpod-\" is forbidden: client: etcd cluster is unavailable or misconfigured",
            Reason: "Forbidden",
            Details: {Name: "execpod-", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 403,
        },
    }
    pods "execpod-" is forbidden: client: etcd cluster is unavailable or misconfigured
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1635

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422071a80>: {
        s: "Namespace e2e-tests-services-l0874 is active",
    }
    Namespace e2e-tests-services-l0874 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-subnet/120/

Multiple broken tests:

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:37:18.596: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421027678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438
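Run 120 is dominated by the framework's after-test assertion at framework.go:438 that every node is Ready. A sketch of the node condition that assertion inspects; the helper name is hypothetical:

    // Illustrative sketch: a node counts as ready only when its NodeReady
    // condition is present and True; anything else trips the
    // "All nodes should be ready after test" assertion quoted above.
    package main

    import v1 "k8s.io/api/core/v1"

    func nodeReady(node *v1.Node) bool {
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == v1.NodeReady {
    			return cond.Status == v1.ConditionTrue
    		}
    	}
    	return false // no NodeReady condition reported at all
    }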

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 14:47:30.939: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213f7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31697 #36574

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 17:33:11.764: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213c4000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30632

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ed2570>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\nheapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    heapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918
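The "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state" message comes from a precondition that tallies system pods before the test runs; note that the three listed pods are Running but have Ready=False, which still counts as not ready. A hypothetical sketch of that tally:

    // Illustrative sketch: collect kube-system pods that are not both
    // Running and Ready, mirroring the precondition quoted above.
    package main

    import v1 "k8s.io/api/core/v1"

    func notRunningAndReady(pods []v1.Pod) []v1.Pod {
    	var bad []v1.Pod
    	for _, pod := range pods {
    		ready := false
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
    				ready = true
    			}
    		}
    		// Running with Ready=False (as with fluentd, heapster, and
    		// kube-proxy above) is still a failure here.
    		if pod.Status.Phase != v1.PodRunning || !ready {
    			bad = append(bad, pod)
    		}
    	}
    	return bad
    }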

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:34:05.137: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209e5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 14:50:43.985: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421426c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32646

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 15:51:38.046: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42143b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4202a59a0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\nheapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    heapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 12:07:59.247: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42187a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 16:50:43.071: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42121f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 17:26:22.648: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a20000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216529b0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\nheapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    heapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:07:05.715: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209e5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 14:13:46.117: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421434278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 16:26:24.335: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42188b400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 15:01:01.377: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421538278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 12:33:39.941: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421624c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 16:54:16.932: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222a4000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38511

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 12:47:14.151: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421930278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 15:44:58.249: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421426278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 12:04:24.244: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42233c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:58:08.977: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209e5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d390e0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\nheapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    heapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:13:55.727: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222ad678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 17:36:35.084: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421426000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420350370>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375
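
The "timed out waiting for the condition" text in these networking failures is the generic wait.ErrWaitTimeout from the wait.Poll helper in k8s.io/apimachinery, which is why the log carries no detail about which probe actually failed. A toy reproduction of how that error surfaces, with the real connectivity probe replaced by a stand-in condition:

    // Illustrative only: a condition that never succeeds makes wait.Poll
    // return the same generic timeout error seen in the failures above.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        err := wait.Poll(1*time.Second, 5*time.Second, func() (bool, error) {
            return false, nil // stand-in for an HTTP probe between pods
        })
        fmt.Println(err) // "timed out waiting for the condition"
    }
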

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a10270>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\nheapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    heapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42210abe0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\nheapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    heapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 17:44:30.779: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dd5400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 16:44:14.342: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217e7400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 17:29:39.442: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f9a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 17:41:17.314: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218fe000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:51:29.618: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421628278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc421d2a020>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:308

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 12:18:00.564: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218b2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 15:07:26.002: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42173cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 17:10:03.085: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a20000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 15:41:46.908: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421734278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36970

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 12:41:24.987: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421014c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 15:36:51.165: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dd5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37525

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222b4a50>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\nheapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-506b636a-nxjh gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    heapster-v1.2.0-2168613315-6n1vh                                   gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:44:36 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-506b636a-nxjh            gke-bootstrap-e2e-default-pool-506b636a-nxjh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:47 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-20 11:42:46 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 16:57:37.185: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209e4000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 14:10:34.852: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42126e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 15:23:55.126: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a02c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 12:11:29.273: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218cb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc420350370>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 12:38:11.770: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421426c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:17:13.062: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209e5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:30:06.459: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ebac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 14:53:57.924: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421be0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31873

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:03:04.001: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421426c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 12:50:36.704: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213f6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:41:39.522: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d31678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc420350370>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 14:16:59.492: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42150b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 13:48:16.022: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223da278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 16:47:25.922: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a1f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 14:57:27.579: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42176a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 17:13:19.941: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219ea000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 20 12:14:47.399: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a1f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-subnet/121/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422047290>: {
        s: "Namespace e2e-tests-services-5d3s7 is active",
    }
    Namespace e2e-tests-services-5d3s7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216cf440>: {
        s: "Namespace e2e-tests-services-5d3s7 is active",
    }
    Namespace e2e-tests-services-5d3s7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f956e0>: {
        s: "Namespace e2e-tests-services-5d3s7 is active",
    }
    Namespace e2e-tests-services-5d3s7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421748c50>: {
        s: "Namespace e2e-tests-services-5d3s7 is active",
    }
    Namespace e2e-tests-services-5d3s7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f94220>: {
        s: "Namespace e2e-tests-services-5d3s7 is active",
    }
    Namespace e2e-tests-services-5d3s7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*url.Error | 0xc4227cab70>: {
        Op: "Post",
        URL: "https://104.154.197.137/api/v1/namespaces/e2e-tests-services-5d3s7/pods",
        Err: {s: "unexpected EOF"},
    }
    Post https://104.154.197.137/api/v1/namespaces/e2e-tests-services-5d3s7/pods: unexpected EOF
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1635

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225f1710>: {
        s: "Namespace e2e-tests-services-5d3s7 is active",
    }
    Namespace e2e-tests-services-5d3s7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216a7670>: {
        s: "Namespace e2e-tests-services-5d3s7 is active",
    }
    Namespace e2e-tests-services-5d3s7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422178fd0>: {
        s: "Namespace e2e-tests-services-5d3s7 is active",
    }
    Namespace e2e-tests-services-5d3s7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224e5cb0>: {
        s: "Namespace e2e-tests-services-5d3s7 is active",
    }
    Namespace e2e-tests-services-5d3s7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071
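
All of the SchedulerPredicates failures in this second run share one cause: the precondition at scheduler_predicates.go:78 found the namespace from the earlier Services test, e2e-tests-services-5d3s7, still active, presumably leaked when the apiserver restart failure above cut that test short. A hedged triage sketch for spotting such lingering namespaces, under the same pre-1.17 client-go and kubeconfig assumptions as the earlier sketch:

    // Hedged sketch: list e2e test namespaces that are still Active,
    // the condition the scheduler predicate setup refuses to run under.
    package main

    import (
        "fmt"
        "strings"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        nsList, err := client.CoreV1().Namespaces().List(metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, ns := range nsList.Items {
            if strings.HasPrefix(ns.Name, "e2e-tests-") && ns.Status.Phase == v1.NamespaceActive {
                fmt.Printf("lingering namespace: %s\n", ns.Name)
            }
        }
    }
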
