kubernetes-e2e-gci-gke-subnet: broken test run #33404
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/2/
Multiple broken tests:
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #27324
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28462
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/3/
Multiple broken tests:
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28462
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: Test {e2e.go}
Issues about this test specifically: #33361
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/4/
Multiple broken tests:
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28416 #31055
Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}
Issues about this test specifically: #30131 #31402
Failed: Test {e2e.go}
Issues about this test specifically: #33361
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/5/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28462
Failed: Test {e2e.go}
Issues about this test specifically: #33361
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/6/
Multiple broken tests:
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28462
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28416 #31055
Failed: Test {e2e.go}
Issues about this test specifically: #33361
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/7/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
Issues about this test specifically: #26128 #26685 #33408
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28416 #31055
Failed: Test {e2e.go}
Issues about this test specifically: #33361
I attempted to triage the following test failure: "[k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster". Based on the logs, my observations are: 3 pods were created, and all of them responded to health checks via the API server proxy.
A node was then blocked from reaching the master; one pod was evicted and a replacement was created.
Oddly, one of the pods that was never evicted stopped responding, and that is what ultimately failed the test.
The primary purpose of the test seems to have been satisfied, IIUC. The secondary failure needs to be inspected, but it is probably better to write a focused proxy test to investigate it further.
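For context, the health checks mentioned above go through the API server's pod proxy subresource. A minimal sketch of the URL involved (the namespace, pod name, and port below are hypothetical placeholders, not values from this run):

```shell
# Sketch of the API server proxy path used for pod health checks.
# NS, POD, and PORT are hypothetical placeholders.
NS=default
POD=my-rc-pod-abc12
PORT=8080
PROXY_PATH="/api/v1/namespaces/${NS}/pods/${POD}:${PORT}/proxy/healthz"
# Against a live cluster one would run: kubectl get --raw "${PROXY_PATH}"
echo "${PROXY_PATH}"
```

If that path stops returning a 2xx for a pod that was never evicted, the problem is more likely in the proxy path (apiserver, kube-proxy, or pod networking) than in the replication controller logic the test targets.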
FYI: The following tests failed only on v1.3 HEAD test runs.
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/8/
Multiple broken tests:
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}
Issues about this test specifically: #28426 #32168
Failed: Test {e2e.go}
Issues about this test specifically: #33361
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26134
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28462
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/9/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: Test {e2e.go}
Issues about this test specifically: #33361
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28462
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/10/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: Test {e2e.go}
Issues about this test specifically: #33361
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/11/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28416 #31055
Failed: Test {e2e.go}
Issues about this test specifically: #33361
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
Issues about this test specifically: #26128 #26685 #33408
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/12/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: Test {e2e.go}
Issues about this test specifically: #33361
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28462
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/13/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28416 #31055
Failed: Test {e2e.go}
Issues about this test specifically: #33361
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}
Issues about this test specifically: #32053 #32758
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}
Issues about this test specifically: #27156 #28979 #30489
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
[FLAKE-PING] @bprashanth This flaky-test issue would love to have more attention.
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/16/
Multiple broken tests:
Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}
Issues about this test specifically: #30263
Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}
Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}
Issues about this test specifically: #29828
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592
Failed: DumpClusterLogs {e2e.go}
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26138 #28429 #28737
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
Issues about this test specifically: #26128 #26685 #33408
Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28503
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}
Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}
Issues about this test specifically: #31502 #32947
Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}
Issues about this test specifically: #30131 #31402
Failed: DiffResources {e2e.go}
Issues about this test specifically: #33373 #33416
Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28493 #29964
Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}
Issues about this test specifically: #31420
Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #28283
Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}
Issues about this test specifically: #28339
Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node capacity {Kubernetes e2e suite}
Issues about this test specifically: #28065
Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}
Issues about this test specifically: #31657
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27532
Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26171 #28188
Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}
Issues about this test specifically: #32371
Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26168 #27450
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}
Issues about this test specifically: #27196 #28998 #32403 #33341
Failed: [k8s.io] Sysctls should support unsafe sysctls which are actually whitelisted {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591
Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27360 #28096 #29615 #31775
Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}
Issues about this test specifically: #29976 #30464 #30687
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29831
Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should becomes running {Kubernetes e2e suite}
Issues about this test specifically: #28450
Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26127 #28081
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512
Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}
Issues about this test specifically: #28337
Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}
Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}
Issues about this test specifically: #29197
Failed: [k8s.io] Secrets should be consumable from pods in volume with Mode set in the item [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #31969
Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}
Issues about this test specifically: #29066 #30592 #31065 #33171
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29050
Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}
Issues about this test specifically: #28106
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
Issues about this test specifically: #32023
Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #30851
Failed: [k8s.io] Mesos applies slave attributes as labels {Kubernetes e2e suite}
Issues about this test specifically: #28359
Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}
Issues about this test specifically: #31635
Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}
Issues about this test specifically: #28064 #28569
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}
Issues about this test specifically: #27443 #27835 #28900 #32512
Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #32025
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516
Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}
Issues about this test specifically: #26172
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27673
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}
Issues about this test specifically: #31066 #31967 #32219 #32535
Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}
Issues about this test specifically: #31408
Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set {Kubernetes e2e suite}
Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #30264
Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519
Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26425 #26715 #28825 #28880 #32854
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}
Issues about this test specifically: #27524 #32057
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}
Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}
Issues about this test specifically: #28006 #28866 #29613
Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}
Issues about this test specifically: #26129 #32341
Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26490
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768
Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29461
Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918
Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}
Issues about this test specifically: #29647
Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}
Issues about this test specifically: #31889
Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28584 #32045
Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}
Issues about this test specifically: #31498
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}
Issues about this test specifically: #32053 #32758
Failed: Test {e2e.go}
Issues about this test specifically: #33361 Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28774 #31429 Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}
Failed: [k8s.io] Mesos starts static pods on every node in the mesos cluster {Kubernetes e2e suite}
Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #33008 Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29834 Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}
Issues about this test specifically: #32034 Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26139 #28342 #28439 #31574 Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}
Issues about this test specifically: #28426 #32168 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27014 #27834
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Failed: [k8s.io] Downward API volume should provide container's memory limit {Kubernetes e2e suite}
Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}
Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}
Issues about this test specifically: #32054
Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341
Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars {Kubernetes e2e suite}
Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication on a single node {Kubernetes e2e suite}
Issues about this test specifically: #28827 #31867
Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}
Issues about this test specifically: #28371 #29604
Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26728 #28266 #30340 #32405
Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27680
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}
Issues about this test specifically: #27976 #29503
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29710
Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}
Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28346
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}
Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}
Issues about this test specifically: #29040
Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}
Issues about this test specifically: #31873
Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}
Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}
Issues about this test specifically: #31085
Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}
Issues about this test specifically: #28773 #29506 #30699 #32734
Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #32122
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233
Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29521
Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}
Issues about this test specifically: #27232
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #30981
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884
Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}
Issues about this test specifically: #27503
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27507 #28275
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/17/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/18/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/19/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/20/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/21/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/22/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/23/ Run so broken it didn't make JUnit output!
[FLAKE-PING] @bprashanth This flaky-test issue would love to have more attention.
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/31/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/28/ Run so broken it didn't make JUnit output!
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/34/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}
Issues about this test specifically: #30216 #31031 #32086
Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27079
Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}
Issues about this test specifically: #31085
Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}
Issues about this test specifically: #28003
Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}
Issues about this test specifically: #31635
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}
Issues about this test specifically: #28523
Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}
Issues about this test specifically: #29647
Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26134
Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}
Issues about this test specifically: #28337
Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32185 #32372
Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26168 #27450
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}
Issues about this test specifically: #26509 #26834 #29780
Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28503
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #31400
Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}
Issues about this test specifically: #29828
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}
Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}
Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}
Issues about this test specifically: #32584
Failed: [k8s.io] Downward API volume should provide container's memory request {Kubernetes e2e suite}
Issues about this test specifically: #29707
Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #28283
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
Issues about this test specifically: #32023
Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950
Failed: [k8s.io] Generated release_1_2 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}
Issues about this test specifically: #32043
Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428
Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #32025
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
Issues about this test specifically: #26128 #26685 #33408
Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}
Issues about this test specifically: #31938
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #30187
Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}
Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #32936
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}
Issues about this test specifically: #31183
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #26127 #28081
Failed: DiffResources {e2e.go}
Issues about this test specifically: #33373 #33416
Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}
Issues about this test specifically: #27023
Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27680
Failed: [k8s.io] Mesos schedules pods annotated with roles on correct slaves {Kubernetes e2e suite}
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
Issues about this test specifically: #32087
Failed: [k8s.io] Proxy version v1 should proxy to cadvisor [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29050
Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}
Issues about this test specifically: #27524 #32057
Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #29752
Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26425 #26715 #28825 #28880 #32854
Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28493 #29964
Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}
Issues about this test specifically: #28348
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26138 #28429 #28737
Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}
Issues about this test specifically: #31151
Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585
Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}
Issues about this test specifically: #30542 #31460 #31479 #31552 #32032
Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}
Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/36/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/37/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/38/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/39/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/40/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/41/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/42/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/43/ Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/44/ Run so broken it didn't make JUnit output!
[FLAKE-PING] @bprashanth This flaky-test issue would love to have more attention.
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/45/ Run so broken it didn't make JUnit output!
[FLAKE-PING] @bprashanth This flaky-test issue would love to have more attention.
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-subnet/1/
Multiple broken tests:
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}
Issues about this test specifically: #30131 #31402
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214
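A quick way to triage a dump like the one above is to count how often each test name recurs across runs. This is not part of the flake bot's tooling — just a minimal, hypothetical helper; the `Failed: <test name> {Kubernetes e2e suite}` line format and the sample entries are taken from the bot's output in this issue:

```python
import re
from collections import Counter

def tally_failures(issue_text):
    """Count occurrences of each e2e test name in 'Failed:' lines.

    Matches the bot's line format:
    'Failed: <test name> {Kubernetes e2e suite}'.
    """
    counts = Counter()
    for line in issue_text.splitlines():
        m = re.match(r"Failed: (.+) \{Kubernetes e2e suite\}", line.strip())
        if m:
            counts[m.group(1)] += 1
    return counts

# Sample entries copied from the failure lists above.
sample = """\
Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}
Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}
Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}
"""

# Print most frequently failing tests first.
for test, n in tally_failures(sample).most_common():
    print(n, test)
```

Sorting by count surfaces the repeat offenders (the Kubelet resource-tracking and Services tests here) so the per-test issues linked above can be triaged first.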