ci-kubernetes-e2e-gke-serial: broken test run #43550

Closed
k8s-github-robot opened this issue Mar 23, 2017 · 75 comments
Labels: kind/flake, sig/network, sig/node

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/994/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #28853 #31585
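
For reference, a minimal sketch of the kind of readiness gate that produces the error above: before each [Serial] spec the suite polls every pod in kube-system and fails if any pod is still not Running and Ready when the 5m0s budget runs out (the real check sits near test/e2e/scheduling/predicates.go:96). Types and helper names below are assumptions for illustration, not the framework's actual code:

    package main

    import (
        "fmt"
        "time"
    )

    // Pod is a local stand-in for the few fields the check reads.
    type Pod struct {
        Name  string
        Phase string // "Running", "Pending", ...
        Ready bool   // status of the pod's Ready condition
    }

    // waitForPodsRunningReady polls until every pod is Running and Ready,
    // or the timeout (5m0s in the run above) elapses.
    func waitForPodsRunningReady(list func() []Pod, timeout, interval time.Duration) error {
        var notReady, total int
        for start := time.Now(); time.Since(start) < timeout; time.Sleep(interval) {
            pods := list()
            total, notReady = len(pods), 0
            for _, p := range pods {
                if p.Phase != "Running" || !p.Ready {
                    notReady++
                }
            }
            if notReady == 0 {
                return nil
            }
        }
        return fmt.Errorf("%d / %d pods in namespace %q are NOT in RUNNING and READY state in %v",
            notReady, total, "kube-system", timeout)
    }

    func main() {
        // A pod stuck Pending, like the kube-dns replicas in the dump above.
        stuck := func() []Pod { return []Pod{{Name: "kube-dns-806549836-5crmf", Phase: "Pending"}} }
        fmt.Println(waitForPodsRunningReady(stuck, time.Second, 250*time.Millisecond))
    }

Because every [Serial] spec runs this same gate, the same five wedged kube-system pods (two kube-dns replicas, kube-dns-autoscaler, kubernetes-dashboard, l7-default-backend) are enough to fail most of the scheduling tests in this one run.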

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1509
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479
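
A stack that bottoms out in runtime/asm_amd64.s is the Go runtime's panic plumbing, so the frame above does not point at the code that actually panicked. For context on what this spec verifies: "kubectl taint nodes <node> <key>-" is expected to remove every taint carrying that key, whatever its value or effect. A self-contained sketch of that semantics, with a local Taint stand-in (illustrative only, not kubectl's source):

    package main

    import "fmt"

    // Taint is a local stand-in for the node taint the test manipulates.
    type Taint struct {
        Key, Value, Effect string
    }

    // removeTaintsByKey drops every taint whose key matches, regardless of
    // value or effect -- the behavior the e2e spec checks.
    func removeTaintsByKey(taints []Taint, key string) []Taint {
        kept := taints[:0]
        for _, t := range taints {
            if t.Key != key {
                kept = append(kept, t)
            }
        }
        return kept
    }

    func main() {
        taints := []Taint{
            {Key: "dedicated", Value: "a", Effect: "NoSchedule"},
            {Key: "dedicated", Value: "b", Effect: "PreferNoSchedule"},
            {Key: "other", Value: "c", Effect: "NoSchedule"},
        }
        // Both "dedicated" taints go; only "other" remains.
        fmt.Println(removeTaintsByKey(taints, "dedicated"))
    }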

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:41
Mar 22 19:06:43.226: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #30317 #31591 #37163
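
All four HPA failures in this run time out waiting for the replica count to settle rather than asserting a wrong value. The horizontal pod autoscaler's documented scaling rule is desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization); the sketch below applies it with illustrative numbers (the e2e test's actual CPU targets live in test/e2e/common/autoscaling_utils.go):

    package main

    import (
        "fmt"
        "math"
    )

    // desiredReplicas applies the documented HPA scaling rule:
    //   desired = ceil(current * currentUtilization / targetUtilization)
    func desiredReplicas(current int, currentUtil, targetUtil float64) int {
        return int(math.Ceil(float64(current) * currentUtil / targetUtil))
    }

    func main() {
        // One replica running at 150% CPU against a 50% target scales
        // straight to 3 -- the "1 pod to 3 pods" step the failed test
        // waited 15 minutes for.
        fmt.Println(desiredReplicas(1, 150, 50)) // 3
        // 3 replicas at 90% against 50% would ask for 6, which the
        // controller then caps at the spec's maxReplicas (5 here).
        fmt.Println(desiredReplicas(3, 90, 50)) // 6
    }

Notably, heapster, the metrics source the HPA relied on in this era, itself shows up as Pending in several of the dumps above, which would leave the controller with no utilization data to act on.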

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    6 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-knf0f      gke-bootstrap-e2e-default-pool-7d442a9d-9tv7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT  }]
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #36914

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:51
Mar 22 14:49:15.732: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Mar 22 15:30:55.371: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #35279

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025
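
The suite-level entry records the ginkgo invocation: a spec runs when its full name matches the focus pattern and does not also match the skip pattern. A small sketch of that selection, with the two regexes copied from the command line above:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        focus := regexp.MustCompile(`\[Serial\]|\[Disruptive\]`)
        skip := regexp.MustCompile(`\[Flaky\]|\[Feature:.+\]`)

        tests := []string{
            "[k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]",
            "[k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes",
            "[k8s.io] Some test [Serial] [Flaky] that would be skipped",
            "[k8s.io] Plain parallel test with no tags",
        }
        for _, name := range tests {
            // Selected only if focused and not skipped.
            selected := focus.MatchString(name) && !skip.MatchString(name)
            fmt.Printf("%-5v %s\n", selected, name)
        }
    }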

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:244
Expected error:
    6 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-knf0f      gke-bootstrap-e2e-default-pool-7d442a9d-9tv7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT  }]
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:241

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:407
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Issues about this test specifically: #37373

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:44
Mar 22 17:16:41.156: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:238
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Issues about this test specifically: #29933 #34111 #38765 #43286
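
The "Test Panicked" entries in this run (Kubectl taint, invalid NodeAffinity, Namespaces, Network Partition, and this AfterSuite failure) all cite runtime/asm_amd64.s:479, which appears to be the runtime's panic path rather than a test source line, suggesting one underlying panic recovered and re-reported by the harness. A minimal sketch of how a Ginkgo-style harness turns a panic into a reported failure (assumed shape, not Ginkgo's actual implementation):

    package main

    import "fmt"

    // runSpec executes one test body and converts a panic into a reported
    // failure. The stack such a report carries ends in the runtime's panic
    // machinery (e.g. runtime/asm_amd64.s), not at the line that misbehaved.
    func runSpec(name string, body func()) (failure string) {
        defer func() {
            if r := recover(); r != nil {
                failure = fmt.Sprintf("%s: Test Panicked: %v", name, r)
            }
        }()
        body()
        return ""
    }

    func main() {
        fmt.Println(runSpec("nil map write", func() {
            var m map[string]int
            m["boom"] = 1 // assignment to a nil map panics
        }))
    }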

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc42124a010>: {
        s: "6 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-knf0f      gke-bootstrap-e2e-default-pool-7d442a9d-9tv7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT  }]\nkube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]\nkube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]\nkube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]\nkubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]\nl7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]\n",
    }
    6 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-knf0f      gke-bootstrap-e2e-default-pool-7d442a9d-9tv7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 17:58:50 -0700 PDT  }]
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]
    kubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]
    l7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc420e43700>: {
        s: "5 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]\nkube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]\nkube-dns-autoscaler-2528518105-9mt6k  gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  }]\nkubernetes-dashboard-2917854236-5lt62 gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:24 -0700 PDT  }]\nl7-default-backend-1044750973-2gn5v   gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:22 -0700 PDT  }]\n",
    }
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-5crmf              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:36 -0700 PDT  }]
    kube-dns-806549836-qx5fw              gke-bootstrap-e2e-default-pool-7d442a9d-qx8w Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:13:27 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 14:14:19 -0700 PDT ContainersNotReady containers with unready status: [ku
@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 labels Mar 23, 2017
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/997/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc421ac62f0>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\nkubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    kubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc421bfa920>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\nkubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    kubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:51
Mar 23 07:47:29.336: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc420f23970>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\nkubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    kubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc421c9b920>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\nkubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    kubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc4212abf10>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\nkubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    kubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc42026d650>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\nkubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    kubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #35279

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:65
Mar 23 08:37:56.898: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:54
Mar 23 07:22:43.030: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc422104450>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\nkubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-z2pb3      gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    kubernetes-dashboard-2917854236-f100k gke-bootstrap-e2e-default-pool-8a1675df-xp8c Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:05:03 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-23 07:04:01 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #28071

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/rescheduler.go:74
Expected error:
    <*errors.errorString | 0xc42165c090>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/rescheduler.go:72

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

@calebamiles calebamiles modified the milestones: v1.6, v1.6.1 Mar 23, 2017
@grodrigues3
Contributor

@davidopp is this scheduler related? Can you PTAL and assign or close?

@davidopp
Member

I don't know, and I can't look at it for a few hours.

@kubernetes/sig-scheduling-bugs can someone from SIG scheduling please take a look at this?

@bsalamat
Member

Almost all of the failures in the SchedulerPredicates tests happen in the preparation phase, in "BeforeEach", while the test is waiting for system pods to be ready. So they don't seem to be scheduler related, but I will investigate further.
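For anyone who wants to poke at that gate outside the suite: the check can be approximated with a short client-go program. This is a rough sketch under stated assumptions (KUBECONFIG pointing at the cluster under test, a recent client-go), not the framework's actual helper that prints the "NOT in RUNNING and READY state" table:

package main

import (
	"context"
	"fmt"
	"os"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		// A pod counts as healthy only if it is Running AND has the Ready condition.
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
				ready = true
			}
		}
		if pod.Status.Phase != v1.PodRunning || !ready {
			// Same information as the POD/NODE/PHASE tables in the failures above.
			fmt.Printf("%s on %s: phase=%s ready=%v\n", pod.Name, pod.Spec.NodeName, pod.Status.Phase, ready)
		}
	}
}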

@bsalamat
Member

I see a few different errors; many of them look like this:
docker_sandbox.go:263] NetworkPlugin kubenet failed on the status hook for pod...

Unfortunately, I know nothing about docker_sandbox. It would be great if people with more familiarity could take a look and tell us whether it is the root cause or whether it is triggered by other issues in the system.
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/994/nodelog?junit=junit_01.xml&wrap=on

@grodrigues3
Contributor

@yujuhong @bowei I'm not sure if this is network or node related. Can one of you PTAL?

@grodrigues3 grodrigues3 added sig/network Categorizes an issue or PR as relevant to SIG Network. sig/node Categorizes an issue or PR as relevant to SIG Node. labels Mar 23, 2017
@dashpole
Contributor

This is from the most recent test run on this node:

Conditions:[]v1.NodeCondition{v1.NodeCondition{Type:"NetworkUnavailable", Status:"True", LastHeartbeatTime:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625874458, nsec:754338394, loc:(*time.Location)(0x4e5a060)}}, Reason:"NoRouteCreated", Message:"Node created without a route"}

I also see lots of errors like this:

container start failed: ImagePullBackOff: Back-off pulling image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0"

Other errors that may be relevant:
cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
iptables.go:175] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
2017/03/23 14:01:19 Error retriving last reserved ip: Failed to retrieve last reserved ip: open /var/lib/cni/networks/kubenet/last_reserved_ip: no such file or directory

@yujuhong
Contributor

yujuhong commented Mar 23, 2017

In https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/997/

heapster wouldn't come up.

E0323 14:05:29.936795    2536 pod_workers.go:182] Error syncing pod 91493743-0fd1-11e7-b9ee-42010af0002a ("heapster-v1.3.0-1288166888-z2pb3_kube-system(91493743-0fd1-11e7-b9ee-42010af0002a)"), skipping: [failed to "StartContainer" for "heapster" with ErrImagePull: "rpc error: code = 2 desc = failed to register layer: rename /var/lib/docker/image/aufs/layerdb/tmp/layer-399932323 /var/lib/docker/image/aufs/layerdb/sha256/ea2709de02c6853178c070f1ef29ac638c88769f9337d2f500281f34e776a35e: directory not empty"
, failed to "StartContainer" for "heapster-nanny" with ErrImagePull: "rpc error: code = 2 desc = failed to register layer: rename /var/lib/docker/image/aufs/layerdb/tmp/layer-320119496 /var/lib/docker/image/aufs/layerdb/sha256/38ac8d0f5bb30c8b742ad97a328b77870afaec92b33faf7e121161bc78a3fec8: directory not empty"

This is a known docker image issue triggered by one of the disruptive (node restart) tests. The fix is in docker 1.13: moby/moby#25523

EDIT: ref: #41007
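For illustration only, the underlying failure mode is easy to reproduce in isolation: on Linux, rename(2) refuses to move a directory onto an existing non-empty directory, so a layer directory left behind by an interrupted pull blocks re-registration with exactly this "directory not empty" error. A minimal standalone sketch (the paths are made up; this is not docker's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base, err := os.MkdirTemp("", "layerdb")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(base)

	tmp := filepath.Join(base, "tmp", "layer-123") // freshly downloaded layer
	dst := filepath.Join(base, "sha256", "abcd")   // target left behind by an earlier, interrupted pull
	_ = os.MkdirAll(tmp, 0o755)
	_ = os.MkdirAll(dst, 0o755)
	// Make the target non-empty, as a stale layer directory would be.
	_ = os.WriteFile(filepath.Join(dst, "diff"), []byte("stale"), 0o644)

	// Fails with ENOTEMPTY ("directory not empty"), matching the kubelet logs above.
	if err := os.Rename(tmp, dst); err != nil {
		fmt.Println("rename failed:", err)
	}
}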

@aveshagarwal
Member

logs have around 5 panics:

runtime error: invalid memory address or nil pointer dereference

@yujuhong
Contributor

logs have around 5 panics:

@aveshagarwal which log is this?

@aveshagarwal
Member

I agree with @bsalamat: several tests are failing in BeforeEach because not all pods are in a running and ready state. I'm still going through the logs to figure out why this is happening, though.

@aveshagarwal
Member

I0322 14:56:12.493] 
I0322 14:56:12.493]   Test Panicked
I0322 14:56:12.493]   runtime error: invalid memory address or nil pointer dereference
I0322 14:56:12.493]   /usr/local/go/src/runtime/asm_amd64.s:479
I0322 14:56:12.493] 
I0322 14:56:12.494]   Full Stack Trace
I0322 14:56:12.494]   	/usr/local/go/src/runtime/panic.go:458 +0x243
I0322 14:56:12.494]   k8s.io/kubernetes/test/e2e/common.(*ResourceConsumer).sendConsumeCPURequest.func1(0x290e560, 0xc42107f3e0, 0xc421cf3d07)
I0322 14:56:12.494]   	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:231 +0x63
I0322 14:56:12.494]   k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc421bea2a0, 0xc4215105a0, 0xc42001f800, 0x2f6f9a3, 0xa)
I0322 14:56:12.494]   	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:271 +0x70
I0322 14:56:12.494]   k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc421bea2a0, 0xc4215105a0, 0x0, 0xc421bea2a0)
I0322 14:56:12.495]   	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:187 +0x41
I0322 14:56:12.495]   k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc421bea2a0, 0xc4215105a0, 0xc421bea2a0, 0xc4215105a0)
I0322 14:56:12.495]   	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:212 +0x5a
I0322 14:56:12.495]   k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x37e11d600, 0x1bf08eb000, 0xc4215105a0, 0x487ae20, 0xc420c950e0)
I0322 14:56:12.495]   	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:201 +0x4d
I0322 14:56:12.495]   k8s.io/kubernetes/test/e2e/common.(*ResourceConsumer).sendConsumeCPURequest(0xc4211f6750, 0xfa)
I0322 14:56:12.495]   	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:244 +0x1d0
I0322 14:56:12.495]   k8s.io/kubernetes/test/e2e/common.(*ResourceConsumer).makeConsumeCPURequests(0xc4211f6750)
I0322 14:56:12.496]   	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:175 +0x36c
I0322 14:56:12.496]   created by k8s.io/kubernetes/test/e2e/common.newResourceConsumer
I0322 14:56:12.496]   	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:137 +0x399
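The shape of that panic is consistent with the condition closure passed to wait.PollImmediate dereferencing a result whose accompanying error was never checked. A hypothetical minimal reproduction (not the e2e code; flakyCall and response are stand-ins for the proxied consume-CPU request):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

type response struct{ body string }

// flakyCall stands in for the proxied request; on failure the returned
// *response is nil.
func flakyCall() (*response, error) {
	return nil, fmt.Errorf("connection refused")
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			// Prints: runtime error: invalid memory address or nil pointer dereference
			fmt.Println("Test Panicked:", r)
		}
	}()
	_ = wait.PollImmediate(time.Second, 5*time.Second, func() (bool, error) {
		resp, err := flakyCall()
		_ = err                     // bug: error ignored
		return resp.body != "", nil // nil dereference panics here
	})
}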

@ixdy ixdy removed their assignment Mar 24, 2017
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1007/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 25 10:34:52.337: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-845e497d-z3cg"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:65
Expected error:
    <*errors.errorString | 0xc420d80ff0>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:422

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 25 10:47:18.867: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-845e497d-z3cg"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:237
Mar 25 11:25:48.235: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-845e497d-z3cg"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-845e497d-z3cg\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-845e497d-z3cg" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 25 09:50:07.231: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-845e497d-z3cg"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 25 10:58:47.725: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-845e497d-z3cg"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 25 09:18:51.281: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-845e497d-z3cg"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #37373

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 25 09:32:05.175: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-845e497d-z3cg"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc420a314e0>: {
        s: "4 / 14 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                     NODE                                         PHASE   GRACE CONDITIONS\nfluentd-gcp-v2.0-wn7mh                                  gke-bootstrap-e2e-default-pool-845e497d-z3cg Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:35 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  }]\nheapster-v1.3.0-1288166888-7t847                        gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:39 -0700 PDT  }]\nkube-dns-806549836-r82m6                                gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:53 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-845e497d-z3cg gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:17:48 -0700 PDT  }]\n",
    }
    4 / 14 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                     NODE                                         PHASE   GRACE CONDITIONS
    fluentd-gcp-v2.0-wn7mh                                  gke-bootstrap-e2e-default-pool-845e497d-z3cg Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:35 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  }]
    heapster-v1.3.0-1288166888-7t847                        gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:39 -0700 PDT  }]
    kube-dns-806549836-r82m6                                gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:53 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-845e497d-z3cg gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:17:48 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc422484c70>: {
        s: "4 / 14 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                     NODE                                         PHASE   GRACE CONDITIONS\nfluentd-gcp-v2.0-wn7mh                                  gke-bootstrap-e2e-default-pool-845e497d-z3cg Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:35 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  }]\nheapster-v1.3.0-1288166888-7t847                        gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:39 -0700 PDT  }]\nkube-dns-806549836-r82m6                                gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:53 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-845e497d-z3cg gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:17:48 -0700 PDT  }]\n",
    }
    4 / 14 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                     NODE                                         PHASE   GRACE CONDITIONS
    fluentd-gcp-v2.0-wn7mh                                  gke-bootstrap-e2e-default-pool-845e497d-z3cg Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:35 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  }]
    heapster-v1.3.0-1288166888-7t847                        gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:39 -0700 PDT  }]
    kube-dns-806549836-r82m6                                gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:53 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-845e497d-z3cg gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:17:48 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc421919aa0>: {
        s: "4 / 14 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                     NODE                                         PHASE   GRACE CONDITIONS\nfluentd-gcp-v2.0-wn7mh                                  gke-bootstrap-e2e-default-pool-845e497d-z3cg Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:35 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  }]\nheapster-v1.3.0-1288166888-7t847                        gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:39 -0700 PDT  }]\nkube-dns-806549836-r82m6                                gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:53 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-845e497d-z3cg gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:17:48 -0700 PDT  }]\n",
    }
    4 / 14 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                     NODE                                         PHASE   GRACE CONDITIONS
    fluentd-gcp-v2.0-wn7mh                                  gke-bootstrap-e2e-default-pool-845e497d-z3cg Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:35 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:16:33 -0700 PDT  }]
    heapster-v1.3.0-1288166888-7t847                        gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:28:39 -0700 PDT  }]
    kube-dns-806549836-r82m6                                gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:53 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:52:44 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-845e497d-z3cg gke-bootstrap-e2e-default-pool-845e497d-z3cg Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 07:31:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 08:17:48 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1008/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc420bd1170>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-806549836-fx4bc             gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:43:43 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:47 -0700 PDT  }]\nkube-dns-806549836-v4vq9             gke-bootstrap-e2e-default-pool-f93dd5ec-8lsv Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:43:45 -0700 PDT ContainersNotReady containers with unready status: [kubedns sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:35 -0700 PDT  }]\nkube-dns-autoscaler-2528518105-jh9q2 gke-bootstrap-e2e-default-pool-f93dd5ec-8lsv Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:43:45 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:36 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-fx4bc             gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:43:43 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:47 -0700 PDT  }]
    kube-dns-806549836-v4vq9             gke-bootstrap-e2e-default-pool-f93dd5ec-8lsv Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:43:45 -0700 PDT ContainersNotReady containers with unready status: [kubedns sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:35 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-jh9q2 gke-bootstrap-e2e-default-pool-f93dd5ec-8lsv Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:36 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:43:45 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 11:42:36 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #34223
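Every "N / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s" failure in these runs comes from the same pre-test gate: the suite polls kube-system pods and fails the spec if any pod is not Running with Ready=True inside the 5m budget. A minimal sketch of that kind of gate, assuming a 2017-era client-go signature (no context argument on List) and illustrative poll intervals, not the framework's actual code:

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podRunningAndReady mirrors the check in the failure message:
// phase must be Running and the Ready condition must be True.
func podRunningAndReady(pod *v1.Pod) bool {
	if pod.Status.Phase != v1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 10s for up to 5m, matching the 5m0s budget in the logs.
	err = wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kube-system").List(metav1.ListOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and retry
		}
		notReady := 0
		for i := range pods.Items {
			if !podRunningAndReady(&pods.Items[i]) {
				notReady++
			}
		}
		fmt.Printf("%d / %d pods not ready\n", notReady, len(pods.Items))
		return notReady == 0, nil
	})
	if err != nil {
		fmt.Println("gate failed:", err)
	}
}
```

Because the gate inspects cluster state rather than the spec under test, a single stuck kube-dns replica (kube-dns-806549836-qgr0p in the dumps below) fails a dozen otherwise-unrelated scheduler specs in a row.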

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:244
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:241

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Issues about this test specifically: #35279
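The "Test Panicked" entries report a frame inside the Go runtime's assembly (asm_amd64.s) because that is where the panic machinery lives; the frame naming the real fault sits higher in the goroutine stack, and the harness keeps the suite alive by recovering in a deferred function. A minimal sketch of that recover-in-defer pattern (illustrative only, not Ginkgo's actual implementation):

```go
package main

import "fmt"

// runSpec executes a test body and converts a panic into a recorded
// failure instead of crashing the whole suite.
func runSpec(name string, body func()) (failure string) {
	defer func() {
		if r := recover(); r != nil {
			failure = fmt.Sprintf("%s: Test Panicked: %v", name, r)
		}
	}()
	body()
	return ""
}

func main() {
	msg := runSpec("invalid podAffinity is rejected", func() {
		var m map[string]int
		m["boom"] = 1 // nil-map write: one common way a test body panics
	})
	fmt.Println(msg)
}
```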

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:41
Mar 25 12:12:07.644: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #30317 #31591 #37163
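The HPA timeout above means the autoscaler never brought the Deployment to the expected size within 15m. The scaling rule HPA documents is desired = ceil(current × currentUtilization / targetUtilization), clamped to the min/max replica bounds; in clusters of this vintage the utilization numbers come from heapster, so an unready heapster pod (visible in later dumps in this thread) can starve the controller of metrics so that it never scales at all. A tiny worked example of the rule (the numbers are hypothetical):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the documented HPA rule:
// desired = ceil(current * currentUtilization / targetUtilization).
func desiredReplicas(current int, currentUtil, targetUtil float64) int {
	return int(math.Ceil(float64(current) * currentUtil / targetUtil))
}

func main() {
	// With a 50% CPU target and 150% observed utilization, one pod
	// scales to ceil(1 * 150 / 50) = 3, the size this test timed out
	// waiting for.
	fmt.Println(desiredReplicas(1, 150, 50))
}
```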

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #28019

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:244
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:241

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-806549836-qgr0p gke-bootstrap-e2e-default-pool-f93dd5ec-xxpj Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 12:24:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1012/
Multiple broken tests:

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:125
Mar 26 10:17:56.754: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:94

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:244
Expected error:
    4 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-8fw7k      gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]
    kube-dns-806549836-5jm5z              gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-263sl  gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]
    kubernetes-dashboard-2917854236-f5wvv gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:241

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:41
Mar 26 10:55:33.691: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:65
Mar 26 09:52:37.927: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Issues about this test specifically: #36914

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:51
Mar 26 11:13:11.063: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:251
Mar 26 12:02:49.877: Pods on node gke-bootstrap-e2e-default-pool-743dbcf3-nr19 are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:187

Issues about this test specifically: #36794

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-8fw7k      gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]
    kube-dns-806549836-1956n              gke-bootstrap-e2e-default-pool-743dbcf3-jqs3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:25 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:25 -0700 PDT  }]
    kube-dns-806549836-k6c83              gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-263sl  gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]
    kubernetes-dashboard-2917854236-n74v4 gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    4 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-8fw7k      gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]
    kube-dns-806549836-k6c83              gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-263sl  gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]
    kubernetes-dashboard-2917854236-n74v4 gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #28019

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:244
Expected error:
    4 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-8fw7k      gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]
    kube-dns-806549836-5jm5z              gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-263sl  gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]
    kubernetes-dashboard-2917854236-f5wvv gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 11:23:57 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:241

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-8fw7k      gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]
    kube-dns-806549836-1956n              gke-bootstrap-e2e-default-pool-743dbcf3-jqs3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:25 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:25 -0700 PDT  }]
    kube-dns-806549836-k6c83              gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-263sl  gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]
    kubernetes-dashboard-2917854236-n74v4 gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-8fw7k     gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]
    kube-dns-806549836-nxv06             gke-bootstrap-e2e-default-pool-743dbcf3-jqs3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-263sl gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-8fw7k     gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]
    kube-dns-806549836-nxv06             gke-bootstrap-e2e-default-pool-743dbcf3-jqs3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-263sl gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-8fw7k      gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]
    kube-dns-806549836-1956n              gke-bootstrap-e2e-default-pool-743dbcf3-jqs3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:25 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:25 -0700 PDT  }]
    kube-dns-806549836-k6c83              gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-263sl  gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]
    kubernetes-dashboard-2917854236-n74v4 gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc421b43980>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-8fw7k     gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]\nkube-dns-806549836-nxv06             gke-bootstrap-e2e-default-pool-743dbcf3-jqs3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT  }]\nkube-dns-autoscaler-2528518105-263sl gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-8fw7k     gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]
    kube-dns-806549836-nxv06             gke-bootstrap-e2e-default-pool-743dbcf3-jqs3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 09:29:17 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-263sl gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:103
Expected error:
    <*errors.errorString | 0xc420484690>: {
        s: "5 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-8fw7k      gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]\nkube-dns-806549836-1956n              gke-bootstrap-e2e-default-pool-743dbcf3-jqs3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:25 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:25 -0700 PDT  }]\nkube-dns-806549836-k6c83              gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  }]\nkube-dns-autoscaler-2528518105-263sl  gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]\nkubernetes-dashboard-2917854236-n74v4 gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  }]\n",
    }
    5 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-8fw7k      gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:00 -0700 PDT  }]
    kube-dns-806549836-1956n              gke-bootstrap-e2e-default-pool-743dbcf3-jqs3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:25 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:25 -0700 PDT  }]
    kube-dns-806549836-k6c83              gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq sidecar]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:28 -0700 PDT  }]
    kube-dns-autoscaler-2528518105-263sl  gke-bootstrap-e2e-default-pool-743dbcf3-nr19 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:26 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:26 -0700 PDT  }]
    kubernetes-dashboard-2917854236-n74v4 gke-bootstrap-e2e-default-pool-743dbcf3-vz5p Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:37:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-26 07:36:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:96

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Mar 26 08:35:57.354: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:44
Mar 26 08:01:11.370: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27406 #27669 #29770 #32642
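
Both HPA failures above are the same symptom: the resource-consumer helper in autoscaling_utils.go polls the scale target and gives up after 15m0s if the replica count never converges. The polling has roughly this shape (a sketch with assumed names and current client-go signatures; the real helper differs in detail):

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForReplicas polls until the deployment reports the desired
    // number of ready replicas or the timeout expires; the "timeout
    // waiting 15m0s for pods size to be 3" lines above are a loop of
    // this kind giving up.
    func waitForReplicas(cs kubernetes.Interface, ns, name string, want int32, timeout time.Duration) error {
        return wait.Poll(20*time.Second, timeout, func() (bool, error) {
            d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return d.Status.ReadyReplicas == want, nil
        })
    }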

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1034/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-2f57c491-cvrk
to equal
    <string>: gke-bootstrap-e2e-default-pool-2f57c491-pngd
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-2f57c491-8zj3
not to equal
    <string>: gke-bootstrap-e2e-default-pool-2f57c491-8zj3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307
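
The "not to equal" output above looks self-contradictory, but it is just Gomega printing both sides of a failed inequality: priorities.go:307 asserts that the test pod's node name differs from the node already running the pods its PodAntiAffinity terms match. Sketched below as a fragment (assumed variable names, current client-go signatures, and the suite's dot-imported Gomega; not the exact test code):

    // The anti-affinity priority test schedules pods that the test
    // pod's PodAntiAffinity terms match, then asserts the test pod
    // lands on a different node. A flake therefore prints the same
    // node on both sides of "not to equal".
    testPod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), "pod-with-anti-affinity", metav1.GetOptions{})
    Expect(err).NotTo(HaveOccurred())
    Expect(testPod.Spec.NodeName).NotTo(Equal(nodeRunningMatchingPods))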

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: ListResources After {e2e.go}

Failed to list resources (error during ./cluster/gce/list-resources.sh: signal: interrupt):
Project: jenkins-gke-e2e-serial
Region: us-central1
Zone: us-central1-f
Instance prefix: gke-bootstrap-e2e
Network: bootstrap-e2e
Provider: gke


[ instance-templates ]

Issues about this test specifically: #42073

@bsalamat

Looks like some of the newly added SchedulerPriorities tests fail in gke-serial. I will try to take a look tomorrow.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1051/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: ListResources After {e2e.go}

Failed to list resources (error during ./cluster/gce/list-resources.sh: signal: interrupt):
Project: jenkins-gke-e2e-serial
Region: us-central1
Zone: us-central1-f
Instance prefix: gke-bootstrap-e2e
Network: bootstrap-e2e
Provider: gke


[ instance-templates ]

Issues about this test specifically: #42073 #43959

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-62ef0f3f-c2r0
not to equal
    <string>: gke-bootstrap-e2e-default-pool-62ef0f3f-c2r0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-62ef0f3f-c2r0
to equal
    <string>: gke-bootstrap-e2e-default-pool-62ef0f3f-fcrf
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1210/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-439dcd97-pljc
to equal
    <string>: gke-bootstrap-e2e-default-pool-439dcd97-xl3z
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-439dcd97-pljc
not to equal
    <string>: gke-bootstrap-e2e-default-pool-439dcd97-pljc
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 18 05:57:31.578: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140
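
"Pods are not spread to each node" (priorities.go:140) means the ReplicationController's replicas did not land on every schedulable node. The check reduces to a per-node tally plus an assertion that no node's count is zero, roughly as in this sketch (assumed helper name; not the exact e2e code):

    import corev1 "k8s.io/api/core/v1"

    // countPodsPerNode tallies scheduled pods by node; the spreading
    // test then asserts every schedulable node appears in the map
    // with a non-zero count.
    func countPodsPerNode(pods []corev1.Pod) map[string]int {
        perNode := map[string]int{}
        for _, p := range pods {
            if p.Spec.NodeName != "" {
                perNode[p.Spec.NodeName]++
            }
        }
        return perNode
    }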

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1211/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 18 09:41:16.263: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-f1de3e12-s3p9
not to equal
    <string>: gke-bootstrap-e2e-default-pool-f1de3e12-s3p9
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-f1de3e12-s3p9
to equal
    <string>: gke-bootstrap-e2e-default-pool-f1de3e12-vd1s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1217/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 19 13:15:15.354: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-7bad1c45-4716
not to equal
    <string>: gke-bootstrap-e2e-default-pool-7bad1c45-4716
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-7bad1c45-4716
to equal
    <string>: gke-bootstrap-e2e-default-pool-7bad1c45-s6hz
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1218/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc42175cf20>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-pt6fr    gke-bootstrap-e2e-default-pool-4b08529b-52qc Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  }]\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-pt6fr    gke-bootstrap-e2e-default-pool-4b08529b-52qc Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  }]
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc421a05b60>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc4216c2040>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be prefer scheduled to node that satisify the NodeAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:85
Expected error:
    <*errors.errorString | 0xc421d6ac60>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-wgb0s    gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-wgb0s    gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:84

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc4218fc1a0>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #36914

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:219
Apr 19 18:41:03.840: Pods on node gke-bootstrap-e2e-default-pool-4b08529b-52qc are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:155

Issues about this test specifically: #36794

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc4204d2c50>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-pt6fr    gke-bootstrap-e2e-default-pool-4b08529b-52qc Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  }]\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-pt6fr    gke-bootstrap-e2e-default-pool-4b08529b-52qc Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  }]
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc421a70780>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-wgb0s gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-wgb0s gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that satisify the PodAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:85
Expected error:
    <*errors.errorString | 0xc42141c380>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-pt6fr    gke-bootstrap-e2e-default-pool-4b08529b-52qc Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  }]\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-pt6fr    gke-bootstrap-e2e-default-pool-4b08529b-52qc Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:40 -0700 PDT  }]
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:84

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc420ba4370>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc421c2e7f0>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-wgb0s gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-wgb0s gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:85
Expected error:
    <*errors.errorString | 0xc42190c330>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:84

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc421689af0>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc420747390>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-wgb0s gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-wgb0s gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #35279

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:125
Apr 19 19:51:06.992: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:94

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:44
Apr 19 18:32:57.057: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:150
Expected error:
    <*errors.errorString | 0xc4215bdb70>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-wgb0s gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-wgb0s gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:147

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc421231f30>: {
        s: "1 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    1 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:150
Expected error:
    <*errors.errorString | 0xc421a76e60>: {
        s: "2 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.3.0-1288166888-wgb0s    gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]\nl7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]\n",
    }
    2 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.3.0-1288166888-wgb0s    gke-bootstrap-e2e-default-pool-4b08529b-fw57 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 20:43:28 -0700 PDT  }]
    l7-default-backend-1044750973-w93f7 gke-bootstrap-e2e-default-pool-4b08529b-clz5 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:09:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 18:08:05 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:147

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:41
Apr 19 22:38:58.716: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #30317 #31591 #37163

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1221/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-f74f960b-j3wq
to equal
    <string>: gke-bootstrap-e2e-default-pool-f74f960b-ml64
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-f74f960b-j3wq
not to equal
    <string>: gke-bootstrap-e2e-default-pool-f74f960b-j3wq
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 20 12:33:20.922: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140
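
"Pods are not spread to each node" means at least one node received no pod from the ReplicationController. A dependency-free sketch of that coverage check (pod-to-node assignments are invented):

    package main

    import "fmt"

    // allNodesCovered reports whether every node received at least one pod,
    // which is what the spreading priority test asserts.
    func allNodesCovered(nodes []string, podNode map[string]string) bool {
        seen := map[string]bool{}
        for _, node := range podNode {
            seen[node] = true
        }
        for _, n := range nodes {
            if !seen[n] {
                return false
            }
        }
        return true
    }

    func main() {
        nodes := []string{"node-a", "node-b", "node-c"}
        podNode := map[string]string{
            "rc-pod-1": "node-a",
            "rc-pod-2": "node-a",
            "rc-pod-3": "node-c",
        }
        fmt.Println("spread ok:", allNodesCovered(nodes, podNode)) // false: node-b is empty
    }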

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1222/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-ab3ac585-qgp5
not to equal
    <string>: gke-bootstrap-e2e-default-pool-ab3ac585-qgp5
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-ab3ac585-hn4m
to equal
    <string>: gke-bootstrap-e2e-default-pool-ab3ac585-qgp5
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.errorString | 0xc422d61530>: {
        s: "Unable to get server version: Get https://35.184.104.170/version: http2: no cached connection was available",
    }
    Unable to get server version: Get https://35.184.104.170/version: http2: no cached connection was available
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:224
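
"http2: no cached connection was available" is raised by Go's HTTP/2 client transport when it is pinned to a pooled connection that has died, so this failure is client-side rather than a cluster regression. One commonly cited mitigation is forcing HTTP/1.1 by giving the transport a non-nil, empty TLSNextProto map; a minimal sketch (the URL is a placeholder, not the apiserver address above):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        // A non-nil, empty TLSNextProto disables HTTP/2 negotiation, so each
        // request uses HTTP/1.1 and avoids the shared HTTP/2 connection pool.
        client := &http.Client{
            Transport: &http.Transport{
                TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
            },
        }
        resp, err := client.Get("https://example.com/version") // placeholder endpoint
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("proto:", resp.Proto) // expect HTTP/1.1
    }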

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1223/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-af5cc14e-ct5v
to equal
    <string>: gke-bootstrap-e2e-default-pool-af5cc14e-ftxv
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 20 19:21:10.260: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-af5cc14e-d406
not to equal
    <string>: gke-bootstrap-e2e-default-pool-af5cc14e-d406
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1225/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-33519fa9-csqp
not to equal
    <string>: gke-bootstrap-e2e-default-pool-33519fa9-csqp
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-33519fa9-csqp
to equal
    <string>: gke-bootstrap-e2e-default-pool-33519fa9-rn4d
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 21 06:43:55.477: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1229/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-287c83fc-dzcl
to equal
    <string>: gke-bootstrap-e2e-default-pool-287c83fc-tzl0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 22 01:35:13.616: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-287c83fc-tzl0
not to equal
    <string>: gke-bootstrap-e2e-default-pool-287c83fc-tzl0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1232/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-daf3c69a-21mm
not to equal
    <string>: gke-bootstrap-e2e-default-pool-daf3c69a-21mm
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-daf3c69a-bh2t
to equal
    <string>: gke-bootstrap-e2e-default-pool-daf3c69a-nnxb
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: TearDown {e2e.go}

error during ./hack/e2e-internal/e2e-down.sh: signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1233/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 23 00:36:30.466: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-39b29c99-7xrd
to equal
    <string>: gke-bootstrap-e2e-default-pool-39b29c99-xmm6
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-39b29c99-tqp0
not to equal
    <string>: gke-bootstrap-e2e-default-pool-39b29c99-tqp0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1236/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-20930830-dl35
to equal
    <string>: gke-bootstrap-e2e-default-pool-20930830-z401
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-20930830-dl35
not to equal
    <string>: gke-bootstrap-e2e-default-pool-20930830-dl35
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 23 13:31:04.493: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1239/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-49854df7-q5gf
not to equal
    <string>: gke-bootstrap-e2e-default-pool-49854df7-q5gf
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-49854df7-ppmx
to equal
    <string>: gke-bootstrap-e2e-default-pool-49854df7-q5gf
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 24 04:16:25.234: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1242/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:43
Expected error:
    <*errors.StatusError | 0xc420848000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-dns\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-dns\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-dns\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_common.go:67

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36950

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27957

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29514 #38288

Failed: DumpClusterLogs {e2e.go}

error during ./cluster/log-dump.sh /workspace/_artifacts: exit status 1

Issues about this test specifically: #33722 #37578 #38206 #40455 #42934 #43155 #44504

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37373

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DNS configMap nameserver should be able to change stubDomain configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 24 12:34:32.965: Couldn't delete ns: "e2e-tests-dns-config-map-2l2vb": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-dns-config-map-2l2vb/deployments\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get deployments.apps) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-dns-config-map-2l2vb/deployments\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get deployments.apps)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420ca1860), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] PersistentVolumes:vsphere should test that a file written to the vspehre volume mount before kubelet restart can be read after restart [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: DiffResources {e2e.go}

Error: 27 leaked resources
[ instance-templates ]
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-2c7d17be  n1-standard-2               2017-04-24T11:24:25.047-07:00
[ instance-groups ]
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-2c7d17be-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
[ instances ]
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-2c7d17be-5nt2  us-central1-f  n1-standard-2               10.128.0.3   104.197.98.184  RUNNING
+gke-bootstrap-e2e-default-pool-2c7d17be-60mm  us-central1-f  n1-standard-2               10.128.0.2   35.184.96.108   RUNNING
+gke-bootstrap-e2e-default-pool-2c7d17be-xmcv  us-central1-f  n1-standard-2               10.128.0.4   35.188.120.249  RUNNING
[ disks ]
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-2c7d17be-5nt2  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-2c7d17be-60mm  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-2c7d17be-xmcv  us-central1-f  100      pd-standard  READY
[ routes ]
+default-route-0be778439635e281                                   bootstrap-e2e           10.146.0.0/20                                                                        1000
[ routes ]
+default-route-25b53db66f8a4ace                                   bootstrap-e2e           10.132.0.0/20                                                                        1000
+default-route-2da969fbc0803824                                   bootstrap-e2e           0.0.0.0/0      default-internet-gateway                                              1000
+default-route-5207b7c3de32b9e9                                   bootstrap-e2e           10.140.0.0/20                                                                        1000
+default-route-8c6a8ca69f682db4                                   bootstrap-e2e           10.138.0.0/20                                                                        1000
+default-route-94ed767ef5384a50                                   bootstrap-e2e           10.148.0.0/20                                                                        1000
+default-route-971cb7e2b0dd6dd3                                   bootstrap-e2e           10.128.0.0/20                                                                        1000
+default-route-a7a6e7a525649586                                   bootstrap-e2e           10.142.0.0/20                                                                        1000
[ routes ]
+gke-bootstrap-e2e-2f1de3dd-95f200f2-291b-11e7-b136-42010af00025  bootstrap-e2e           10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-2c7d17be-5nt2  1000
+gke-bootstrap-e2e-2f1de3dd-986e2d41-291b-11e7-b136-42010af00025  bootstrap-e2e           10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-2c7d17be-60mm  1000
+gke-bootstrap-e2e-2f1de3dd-98cf9f23-291b-11e7-b136-42010af00025  bootstrap-e2e           10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-2c7d17be-xmcv  1000
[ firewall-rules ]
+NAME                            NETWORK        SRC_RANGES        RULES                         SRC_TAGS  TARGET_TAGS
+gke-bootstrap-e2e-2f1de3dd-all  bootstrap-e2e  10.72.0.0/14      ah,sctp,tcp,udp,icmp,esp
+gke-bootstrap-e2e-2f1de3dd-ssh  bootstrap-e2e  35.188.93.126/32  tcp:22                                  gke-bootstrap-e2e-2f1de3dd-node
+gke-bootstrap-e2e-2f1de3dd-vms  bootstrap-e2e  10.128.0.0/9      tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-2f1de3dd-node

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:431
Expected error:
    <*errors.StatusError | 0xc4200a0300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-services-m67vt/pods\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-m67vt/pods\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-m67vt/pods\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3965

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29512

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:591
Expected error:
    <*errors.StatusError | 0xc421925880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-volume-provisioning-0d4w4/persistentvolumeclaims/pvc-m1rd4\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get persistentvolumeclaims pvc-m1rd4)",
            Reason: "InternalError",
            Details: {
                Name: "pvc-m1rd4",
                Group: "",
                Kind: "persistentvolumeclaims",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-volume-provisioning-0d4w4/persistentvolumeclaims/pvc-m1rd4\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-volume-provisioning-0d4w4/persistentvolumeclaims/pvc-m1rd4\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get persistentvolumeclaims pvc-m1rd4)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:589

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-2c7d17be-60mm
to equal
    <string>: gke-bootstrap-e2e-default-pool-2c7d17be-xmcv
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc42170a380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-sched-pred-cnb5d/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-sched-pred-cnb5d/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-sched-pred-cnb5d/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc420f31600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-wrapper-zwkz3/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-wrapper-zwkz3/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-wrapper-zwkz3/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37479

Failed: [k8s.io] Daemon set [Serial] should retry creating failed daemon pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 24 12:55:12.585: All nodes should be ready after test, the server has asked for the client to provide credentials (get nodes)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:379

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #34223

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28853 #31585
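
This predicate test taints a node and verifies that only a pod carrying a matching toleration can land on it; here the wait for matching pods timed out. A small sketch of a matching taint/toleration pair, assuming the current k8s.io/api/core/v1 types (the key and value are illustrative):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        taint := v1.Taint{
            Key:    "kubernetes.io/e2e-taint-key", // illustrative key
            Value:  "testing-taint-value",
            Effect: v1.TaintEffectNoSchedule,
        }
        // A pod tolerates the taint only if key, value, and effect line up.
        toleration := v1.Toleration{
            Key:      taint.Key,
            Operator: v1.TolerationOpEqual,
            Value:    taint.Value,
            Effect:   taint.Effect,
        }
        fmt.Println("tolerates:", toleration.ToleratesTaint(&taint))
    }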

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 24 12:33:30.400: Couldn't delete ns: "e2e-tests-namespaces-ssg45": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-namespaces-ssg45/deployments\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-namespaces-ssg45/deployments\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420e84d70), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31407

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35277

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29516

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4203ae110>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1254/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 25 04:30:12.091: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-f28a7627-4zd2
not to equal
    <string>: gke-bootstrap-e2e-default-pool-f28a7627-4zd2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307
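
This one did reach its assertion: the Gomega "not to equal" output means the pod landed on the very node it was expected to avoid. The test exercises pod anti-affinity; whether this priority test uses the required or preferred form, the selector/topologyKey shape is the same. A sketch of a hard anti-affinity term (labels and image are illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hard anti-affinity: never co-locate with pods labeled security=S1
	// on the same node (topologyKey kubernetes.io/hostname).
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "anti-affinity-pod"},
		Spec: v1.PodSpec{
			Affinity: &v1.Affinity{
				PodAntiAffinity: &v1.PodAntiAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
						LabelSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{"security": "S1"}, // illustrative
						},
						TopologyKey: "kubernetes.io/hostname",
					}},
				},
			},
			Containers: []v1.Container{{Name: "pause", Image: "gcr.io/google-containers/pause-amd64:3.0"}},
		},
	}
	term := pod.Spec.Affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0]
	fmt.Println(pod.Name, "repels pods labeled", term.LabelSelector.MatchLabels)
}
```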

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-f28a7627-4zd2
to equal
    <string>: gke-bootstrap-e2e-default-pool-f28a7627-vgw3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107
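
And this assertion means the pod did not land on the least-utilized node. The priority being exercised is "least requested" scoring; as a first approximation (a sketch of the 2017-era formula, not the scheduler's exact plumbing), each resource scores (capacity - requested) * 10 / capacity and the CPU and memory scores are averaged:

```go
package main

import "fmt"

// leastRequestedScore sketches the scheduler's least-requested
// priority: nodes with a lower fraction of requested resources
// score higher, so a new pod should prefer the least-utilized node.
func leastRequestedScore(requested, capacity int64) int64 {
	if capacity == 0 || requested > capacity {
		return 0
	}
	return (capacity - requested) * 10 / capacity
}

func main() {
	cpuScore := leastRequestedScore(600, 2000)    // millicores
	memScore := leastRequestedScore(1<<30, 4<<30) // bytes
	fmt.Println("node score:", (cpuScore+memScore)/2)
}
```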

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1255/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:384
Expected error:
    <*errors.errorString | 0xc4223532b0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:344

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:132
error waiting for daemon pod to start
Expected error:
    <*errors.errorString | 0xc4203ac540>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:121

Issues about this test specifically: #31428

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:431
Expected error:
    <*errors.errorString | 0xc421772f70>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:397

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-70a149e1-nwz9
not to equal
    <string>: gke-bootstrap-e2e-default-pool-70a149e1-nwz9
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421a6c8d0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:323
error waiting for daemon pod to start
Expected error:
    <*errors.errorString | 0xc4203ac540>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:300
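
For reference, this test drives a DaemonSet whose updateStrategy is RollingUpdate and waits for pods to roll after a template change; the timeout above is in the initial daemon pod start, before any update happens. A sketch of the object shape, shown against apps/v1 (these 1.6/1.7-era runs used extensions/v1beta1, but the strategy fields have the same layout; names and image are illustrative):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"app": "ds-demo"}
	maxUnavailable := intstr.FromInt(1)
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ds-demo"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate replaces daemon pods node by node after a
			// template change, keeping at most maxUnavailable pods down.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type:          appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{MaxUnavailable: &maxUnavailable},
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{Name: "pause", Image: "gcr.io/google-containers/pause-amd64:3.0"}},
				},
			},
		},
	}
	fmt.Println(ds.Name, "strategy:", ds.Spec.UpdateStrategy.Type)
}
```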

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1258/
Multiple broken tests:

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:162
Expected error:
    <*errors.errorString | 0xc420813730>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:161
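
Note that every failure in this run is the same cascade: the framework's setup refuses to start while the leaked e2e-tests-services-vmq5c namespace is still terminating, so none of these tests reached their assertions. For what the NoExecuteTaintManager tests actually check: a NoExecute taint evicts running pods immediately unless they tolerate it, and a toleration with TolerationSeconds only defers the eviction. A sketch of such a finite toleration (the key and value are illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	seconds := int64(60)
	// A pod carrying this toleration survives a matching NoExecute
	// taint for 60s before the taint manager evicts it. With a nil
	// TolerationSeconds it would stay indefinitely; with no
	// toleration at all it would be evicted right away.
	tol := v1.Toleration{
		Key:               "kubernetes.io/e2e-evict-taint-key", // illustrative key
		Operator:          v1.TolerationOpEqual,
		Value:             "evictTaintVal",
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}
	fmt.Printf("tolerate %s=%s:%s for %ds\n", tol.Key, tol.Value, tol.Effect, *tol.TolerationSeconds)
}
```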

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc420fa5ac0>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc42118dc90>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:162
Expected error:
    <*errors.errorString | 0xc42224b120>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:161

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc4211dd870>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] NoExecuteTaintManager [Serial] removing taint cancels eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:162
Expected error:
    <*errors.errorString | 0xc421094f60>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:161

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc42165ae70>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:85
Expected error:
    <*errors.errorString | 0xc4222cba80>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:81

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc421f10c20>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc4207c39b0>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc4203696a0>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc4201ea890>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc420faac70>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc420a98b20>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc4216fe190>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:162
Expected error:
    <*errors.errorString | 0xc42131e840>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:161

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:85
Expected error:
    <*errors.errorString | 0xc4211dd0e0>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:81

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:431
Expected error:
    <*url.Error | 0xc422171650>: {
        Op: "Get",
        URL: "https://162.222.183.166/api/v1/namespaces/e2e-tests-services-vmq5c/services/service2",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 162, 222, 183, 166],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://162.222.183.166/api/v1/namespaces/e2e-tests-services-vmq5c/services/service2: dial tcp 162.222.183.166:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:424

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-a94a1e22-t9p1
not to equal
    <string>: gke-bootstrap-e2e-default-pool-a94a1e22-t9p1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc420dc7330>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be prefer scheduled to node that satisify the NodeAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:85
Expected error:
    <*errors.errorString | 0xc42148fe00>: {
        s: "Namespace e2e-tests-services-vmq5c is active",
    }
    Namespace e2e-tests-services-vmq5c is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:81

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1259/
Multiple broken tests:

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 26 03:03:00.927: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-5c22fb2e-2s4p"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:379

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-5c22fb2e-2s4p
to equal
    <string>: gke-bootstrap-e2e-default-pool-5c22fb2e-v4qj
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 26 02:28:33.187: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-5c22fb2e-2s4p"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:379

Issues about this test specifically: #35277

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:65
Expected error:
    <*errors.errorString | 0xc42122f290>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:422

Issues about this test specifically: #28657 #30519 #33878
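
The HPA failure above is in fixture setup ("Only 4 pods started out of 5" while standing up the initial ReplicationController), not in the autoscaler's scale-down decisions. For reference, the object under test is roughly an autoscaling/v1 HPA pinned to an RC; a sketch (target name and thresholds are illustrative):

```go
package main

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	minReplicas := int32(1)
	targetCPU := int32(50)
	// The controller scales the RC between 1 and 5 replicas to hold
	// average CPU utilization near 50%; the e2e varies the load and
	// checks the replica count settles at 5, then 3, then 1.
	hpa := &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "rc-hpa"},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				Kind: "ReplicationController",
				Name: "rc", // illustrative target
			},
			MinReplicas:                    &minReplicas,
			MaxReplicas:                    5,
			TargetCPUUtilizationPercentage: &targetCPU,
		},
	}
	fmt.Printf("%s: %d..%d replicas at %d%% CPU\n",
		hpa.Name, *hpa.Spec.MinReplicas, hpa.Spec.MaxReplicas, *hpa.Spec.TargetCPUUtilizationPercentage)
}
```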

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc42039ea00>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Daemon set [Serial] should retry creating failed daemon pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:248
error waiting for daemon pod to start
Expected error:
    <*errors.errorString | 0xc4202bcca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:235

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-5c22fb2e-2s4p\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-5c22fb2e-2s4p" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:162
Expected error:
    <*errors.errorString | 0xc42019f410>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:161

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:375
Apr 26 01:44:11.613: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:304

Issues about this test specifically: #37373

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1261/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-cf813684-fv79
to equal
    <string>: gke-bootstrap-e2e-default-pool-cf813684-pxxv
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 26 12:58:39.289: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-cf813684-fv79
not to equal
    <string>: gke-bootstrap-e2e-default-pool-cf813684-fv79
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1263/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 27 00:40:26.180: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-db5e79fd-zbcw
not to equal
    <string>: gke-bootstrap-e2e-default-pool-db5e79fd-zbcw
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-db5e79fd-sxtb
to equal
    <string>: gke-bootstrap-e2e-default-pool-db5e79fd-zbcw
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1264/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc420c75000>: {
        s: "7 / 17 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                     NODE                                         PHASE   GRACE CONDITIONS\nfluentd-gcp-v2.0-28jcb                                  gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:42 -0700 PDT  }]\nkube-dns-806549836-1hvtc                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  }]\nkube-dns-806549836-8tx69                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]\nkube-dns-autoscaler-2925080267-p3vqs                    gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b91df1a0-smbn gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:31 -0700 PDT  }]\nkubernetes-dashboard-2917854236-rd241                   gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]\nl7-default-backend-1044750973-nn64j                     gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  }]\n",
    }
    7 / 17 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                     NODE                                         PHASE   GRACE CONDITIONS
    fluentd-gcp-v2.0-28jcb                                  gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:42 -0700 PDT  }]
    kube-dns-806549836-1hvtc                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  }]
    kube-dns-806549836-8tx69                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]
    kube-dns-autoscaler-2925080267-p3vqs                    gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b91df1a0-smbn gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:31 -0700 PDT  }]
    kubernetes-dashboard-2917854236-rd241                   gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]
    l7-default-backend-1044750973-nn64j                     gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 27 02:42:19.123: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 05:30:59.908: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-b91df1a0-smbn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:379

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 04:38:47.025: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-b91df1a0-smbn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:379

Issues about this test specifically: #37373

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-b91df1a0-b8ml
not to equal
    <string>: gke-bootstrap-e2e-default-pool-b91df1a0-b8ml
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 06:01:35.218: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-b91df1a0-smbn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:379

Issues about this test specifically: #36794

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-b91df1a0-b8ml
to equal
    <string>: gke-bootstrap-e2e-default-pool-b91df1a0-smbn
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:200
Expected error:
    <*errors.errorString | 0xc420f2b180>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:193

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 05:48:32.773: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-b91df1a0-smbn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:379

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 04:42:01.952: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-b91df1a0-smbn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:379
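
This kubectl-taint test failed in teardown (node NotReady), not on its assertion. What it asserts is that `kubectl taint nodes <node> <key>-` removes every taint with that key, regardless of value or effect; a Go sketch of the equivalent filtering over node.Spec.Taints (the taint keys/values are illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// removeTaintsWithKey drops every taint with the given key, whatever
// its value or effect -- the semantics of `kubectl taint ... <key>-`.
func removeTaintsWithKey(node *v1.Node, key string) {
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key != key {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
}

func main() {
	node := &v1.Node{Spec: v1.NodeSpec{Taints: []v1.Taint{
		{Key: "dedicated", Value: "a", Effect: v1.TaintEffectNoSchedule},
		{Key: "dedicated", Value: "b", Effect: v1.TaintEffectPreferNoSchedule},
		{Key: "other", Value: "c", Effect: v1.TaintEffectNoSchedule},
	}}}
	removeTaintsWithKey(node, "dedicated")
	fmt.Println(node.Spec.Taints) // only the "other" taint remains
}
```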

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 05:19:00.026: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-b91df1a0-smbn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:379

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that satisify the PodAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:85
Expected error:
    <*errors.errorString | 0xc421116ed0>: {
        s: "7 / 17 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                     NODE                                         PHASE   GRACE CONDITIONS\nfluentd-gcp-v2.0-28jcb                                  gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:42 -0700 PDT  }]\nkube-dns-806549836-1hvtc                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  }]\nkube-dns-806549836-8tx69                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]\nkube-dns-autoscaler-2925080267-p3vqs                    gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b91df1a0-smbn gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:31 -0700 PDT  }]\nkubernetes-dashboard-2917854236-rd241                   gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]\nl7-default-backend-1044750973-nn64j                     gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  }]\n",
    }
    7 / 17 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                     NODE                                         PHASE   GRACE CONDITIONS
    fluentd-gcp-v2.0-28jcb                                  gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:42 -0700 PDT  }]
    kube-dns-806549836-1hvtc                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  }]
    kube-dns-806549836-8tx69                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]
    kube-dns-autoscaler-2925080267-p3vqs                    gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b91df1a0-smbn gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:31 -0700 PDT  }]
    kubernetes-dashboard-2917854236-rd241                   gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]
    l7-default-backend-1044750973-nn64j                     gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:84

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc420497670>: {
        s: "7 / 17 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                     NODE                                         PHASE   GRACE CONDITIONS\nfluentd-gcp-v2.0-28jcb                                  gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:42 -0700 PDT  }]\nkube-dns-806549836-1hvtc                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  }]\nkube-dns-806549836-8tx69                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]\nkube-dns-autoscaler-2925080267-p3vqs                    gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b91df1a0-smbn gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:31 -0700 PDT  }]\nkubernetes-dashboard-2917854236-rd241                   gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]\nl7-default-backend-1044750973-nn64j                     gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  }]\n",
    }
    7 / 17 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                     NODE                                         PHASE   GRACE CONDITIONS
    fluentd-gcp-v2.0-28jcb                                  gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:42 -0700 PDT  }]
    kube-dns-806549836-1hvtc                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:15 -0700 PDT  }]
    kube-dns-806549836-8tx69                                gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:33:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]
    kube-dns-autoscaler-2925080267-p3vqs                    gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:06 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b91df1a0-smbn gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:17:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:31 -0700 PDT  }]
    kubernetes-dashboard-2917854236-rd241                   gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:37 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:03 -0700 PDT  }]
    l7-default-backend-1044750973-nn64j                     gke-bootstrap-e2e-default-pool-b91df1a0-smbn Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:32:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 01:18:01 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1266/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] PersistentVolumes:vsphere should test that a vspehre volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:431
Apr 27 15:01:17.602: error restarting apiserver: error running gcloud [container clusters --project=jenkins-gke-e2e-serial --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.384+493e4486b69ebd --quiet]; got error exit status 1, stdout "", stderr "Upgrading bootstrap-e2e...\n..........................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1493330386424-d2e18779'\n operationType: OperationTypeValueValuesEnum(UPGRADE_MASTER, 3)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/operations/operation-1493330386424-d2e18779'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'Master upgrade to 1.7.0-alpha.2.384+493e4486b69ebd failed'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/clusters/bootstrap-e2e'\n zone: u'us-central1-f'>] finished with error: Master upgrade to 1.7.0-alpha.2.384+493e4486b69ebd failed\n"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:411

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
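
Unlike the timeouts, this failure happens before the service test proper: the suite drives a GKE master upgrade through gcloud, and the operation itself reports "Master upgrade ... failed", i.e. the failure is on the GKE side. A sketch of how the suite shells out to gcloud, mirroring the command quoted in the log (the helper name and argument plumbing here are illustrative, not the actual test/e2e code):

    // Illustrative sketch of the apiserver-restart path: run the gcloud
    // upgrade command from the log and surface a non-zero exit as an error.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func restartAPIServer(project, zone, cluster, version string) error {
        cmd := exec.Command("gcloud", "container", "clusters",
            "--project="+project, "--zone="+zone,
            "upgrade", cluster, "--master",
            "--cluster-version="+version, "--quiet")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("error restarting apiserver: %v, output: %s", err, out)
        }
        return nil
    }

    func main() {
        // Values taken from the failed run above.
        fmt.Println(restartAPIServer("jenkins-gke-e2e-serial", "us-central1-f",
            "bootstrap-e2e", "1.7.0-alpha.2.384+493e4486b69ebd"))
    }

The failed operation can also be inspected after the fact with gcloud container operations describe using the operation name from the stderr dump.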

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-d949f3e0-t7sx
not to equal
    <string>: gke-bootstrap-e2e-default-pool-d949f3e0-t7sx
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307
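
This one is also not a timeout: it is the test's final assertion. The test schedules a pod whose PodAntiAffinity terms match an existing pod and expects the scheduler to place the two on different nodes; here both landed on gke-bootstrap-e2e-default-pool-d949f3e0-t7sx. A hedged sketch of the failing check in Gomega form (the e2e suite asserts with Gomega; variable names here are hypothetical):

    // Sketch of the assertion at priorities.go:307: after creating a pod
    // that anti-affines with labelPod, the scheduler should have placed it
    // on a different node. Identical node names fail NotTo(Equal(...)).
    package sketch

    import (
        "testing"

        . "github.com/onsi/gomega"
    )

    func TestAntiAffinitySpread(t *testing.T) {
        g := NewGomegaWithT(t)
        labelPodNode := "gke-bootstrap-e2e-default-pool-d949f3e0-t7sx"
        testPodNode := "gke-bootstrap-e2e-default-pool-d949f3e0-t7sx" // same node: assertion fails
        g.Expect(testPodNode).NotTo(Equal(labelPodNode))
    }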

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195
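
For context on what the NoExecuteTaintManager cases exercise: a pod tolerating a NoExecute taint with a finite tolerationSeconds should stay on the tainted node only for that window and then be evicted, and the test times out waiting for that eviction. A minimal sketch of such a toleration using the core API types (the key and window are hypothetical, not the test's actual values):

    // Sketch of a finite NoExecute toleration: the taint manager leaves the
    // pod on a matching tainted node for at most TolerationSeconds, then
    // evicts it -- the behavior this test waits for.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        window := int64(60) // hypothetical finite toleration window, in seconds
        tol := v1.Toleration{
            Key:               "example.com/e2e-taint", // hypothetical key
            Operator:          v1.TolerationOpEqual,
            Value:             "evict",
            Effect:            v1.TaintEffectNoExecute,
            TolerationSeconds: &window,
        }
        fmt.Printf("%+v\n", tol)
    }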

Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29512

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that satisify the PodAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bcfb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1267/
Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36950

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31407

Failed: [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] PersistentVolumes:vsphere should test that a file written to the vspehre volume mount before kubelet restart can be read after restart [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31918

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30441

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36457

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29444

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that satisify the PodAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] PersistentVolumes:vsphere should test that a vspehre volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:431
Apr 27 15:56:40.174: error restarting apiserver: error running gcloud [container clusters --project=jenkins-gke-e2e-serial --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.418+c2595909e9d012 --quiet]; got error exit status 1, stdout "", stderr "Upgrading bootstrap-e2e...\n....................................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1493333698983-bda103be'\n operationType: OperationTypeValueValuesEnum(UPGRADE_MASTER, 3)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/operations/operation-1493333698983-bda103be'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'Master upgrade to 1.7.0-alpha.2.418+c2595909e9d012 failed'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/clusters/bootstrap-e2e'\n zone: u'us-central1-f'>] finished with error: Master upgrade to 1.7.0-alpha.2.418+c2595909e9d012 failed\n"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:411

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36794

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37373

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27957

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] should retry creating failed daemon pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29512

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #33883

Failed: [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35277

Failed: [k8s.io] NoExecuteTaintManager [Serial] removing taint cancels eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] DNS configMap nameserver should be able to change stubDomain configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31428

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28071

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be prefer scheduled to node that satisify the NodeAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1268/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:622
Expected error:
    <*errors.StatusError | 0xc4219e0180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get persistentvolumeclaims pvc-5g32d)",
            Reason: "Unauthorized",
            Details: {
                Name: "pvc-5g32d",
                Group: "",
                Kind: "persistentvolumeclaims",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get persistentvolumeclaims pvc-5g32d)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:620
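
Note that this one is not the usual timeout either: a PVC GET came back 401 Unauthorized mid-test, which would be consistent with the master upgrades in the same run cycling the apiserver out from under the running client. The apimachinery error helpers classify this case directly; a minimal sketch reconstructing the quoted error shape:

    // Rebuild the 401 from the failure above and classify it. NewUnauthorized
    // produces a *errors.StatusError with Reason=Unauthorized and Code=401,
    // matching the dump above.
    package main

    import (
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
    )

    func main() {
        err := apierrors.NewUnauthorized(
            "the server has asked for the client to provide credentials (get persistentvolumeclaims pvc-5g32d)")
        fmt.Println(apierrors.IsUnauthorized(err)) // prints: true
    }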

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27957

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31918

Failed: [k8s.io] Daemon set [Serial] should retry creating failed daemon pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] PersistentVolumes:vsphere should test that a file written to the vspehre volume mount before kubelet restart can be read after restart [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that satisify the PodAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30441

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #34223

Failed: DiffResources {e2e.go}

Error: 27 leaked resources
[ instance-templates ]
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-4d04b6c9  n1-standard-2               2017-04-27T18:03:46.512-07:00
[ instance-groups ]
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-4d04b6c9-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
[ instances ]
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-4d04b6c9-ft3h  us-central1-f  n1-standard-2               10.128.0.3   35.184.247.237  RUNNING
+gke-bootstrap-e2e-default-pool-4d04b6c9-mcwl  us-central1-f  n1-standard-2               10.128.0.5   104.154.25.240  RUNNING
+gke-bootstrap-e2e-default-pool-4d04b6c9-pmvh  us-central1-f  n1-standard-2               10.128.0.2   35.184.178.163  RUNNING
[ disks ]
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-4d04b6c9-ft3h  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-4d04b6c9-mcwl  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-4d04b6c9-pmvh  us-central1-f  100      pd-standard  READY
[ routes ]
+default-route-0d93e15b6d0c72f9                                   bootstrap-e2e           10.148.0.0/20                                                                        1000
[ routes ]
+default-route-3869fe6fab85f54d                                   bootstrap-e2e           0.0.0.0/0      default-internet-gateway                                              1000
+default-route-653f57a095c707db                                   bootstrap-e2e           10.142.0.0/20                                                                        1000
+default-route-6f60b9a598cc7bdc                                   bootstrap-e2e           10.140.0.0/20                                                                        1000
+default-route-8fb29a828b8dd40f                                   bootstrap-e2e           10.138.0.0/20                                                                        1000
+default-route-9e980bcc2a959da4                                   bootstrap-e2e           10.128.0.0/20                                                                        1000
+default-route-a8940264f7d65186                                   bootstrap-e2e           10.146.0.0/20                                                                        1000
+default-route-b2ea277dceb80e50                                   bootstrap-e2e           10.132.0.0/20                                                                        1000
[ routes ]
+gke-bootstrap-e2e-1fc0f930-9e2f289f-2bb2-11e7-bbff-42010af0000d  bootstrap-e2e           10.72.3.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-4d04b6c9-mcwl  1000
+gke-bootstrap-e2e-1fc0f930-ee22aa9b-2bae-11e7-9058-42010af0000d  bootstrap-e2e           10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-4d04b6c9-ft3h  1000
+gke-bootstrap-e2e-1fc0f930-eed26d46-2bae-11e7-9058-42010af0000d  bootstrap-e2e           10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-4d04b6c9-pmvh  1000
[ firewall-rules ]
+NAME                            NETWORK        SRC_RANGES        RULES                         SRC_TAGS  TARGET_TAGS
+gke-bootstrap-e2e-1fc0f930-all  bootstrap-e2e  10.72.0.0/14      tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-1fc0f930-ssh  bootstrap-e2e  35.188.10.136/32  tcp:22                                  gke-bootstrap-e2e-1fc0f930-node
+gke-bootstrap-e2e-1fc0f930-vms  bootstrap-e2e  10.128.0.0/9      tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-1fc0f930-node

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153
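
If the leak above needs manual cleanup, a rough janitor sketch follows (hypothetical helper; assumes gcloud is authenticated against the test project, and that deleting the managed instance group also removes its three instances along with their auto-delete boot disks):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// gcloud shells out to the gcloud CLI with the given arguments.
func gcloud(args ...string) error {
	cmd := exec.Command("gcloud", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Tearing down the managed instance group removes the leaked
	// instances (and, with them, their boot disks) in one call.
	if err := gcloud("compute", "instance-groups", "managed", "delete",
		"gke-bootstrap-e2e-default-pool-4d04b6c9-grp",
		"--zone", "us-central1-f", "--quiet"); err != nil {
		log.Fatal(err)
	}
	if err := gcloud("compute", "instance-templates", "delete",
		"gke-bootstrap-e2e-default-pool-4d04b6c9", "--quiet"); err != nil {
		log.Fatal(err)
	}
}
```

The leaked routes and firewall rules can be deleted the same way via "compute routes delete" and "compute firewall-rules delete".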

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30317 #31591 #37163
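
The HPA cases in this run all drive CPU-based scaling. For reference, a minimal autoscaling/v1 object of the shape under test (target name and thresholds are illustrative, not taken from the suite):

```go
package main

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	min, cpu := int32(1), int32(50)
	hpa := autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "cpu-scaler"}, // illustrative name
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				Kind: "Deployment", Name: "cpu-scaler", APIVersion: "apps/v1",
			},
			MinReplicas:                    &min, // scale "from 1 pod to 3 pods and from 3 to 5"
			MaxReplicas:                    5,
			TargetCPUUtilizationPercentage: &cpu,
		},
	}
	fmt.Printf("%+v\n", hpa.Spec)
}
```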

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195
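
The NoExecuteTaintManager cases above and below all hinge on NoExecute tolerations. A sketch of the object shape (key and value are hypothetical; TolerationSeconds applies only to the "finite tolerations" variant, where the pod is evicted once the window closes):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	secs := int64(60) // evict the pod 60s after the taint lands
	tol := v1.Toleration{
		Key:               "e2e-evict-taint-key", // hypothetical taint key
		Operator:          v1.TolerationOpEqual,
		Value:             "evictTaintVal",
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &secs,
	}
	fmt.Printf("%+v\n", tol)
}
```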

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] NoExecuteTaintManager [Serial] removing taint cancels eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195
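
Both DaemonSet update-strategy cases in this run exercise the spec.updateStrategy field; in today's apps/v1 API the two strategies under test look like this (a sketch, not the suite's own fixture):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// RollingUpdate replaces daemon pods in place when the pod template
	// changes; OnDelete (the companion test below) waits for each pod to
	// be deleted manually before recreating it from the new template.
	rolling := appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType}
	onDelete := appsv1.DaemonSetUpdateStrategy{Type: appsv1.OnDeleteDaemonSetStrategyType}
	fmt.Println(rolling.Type, onDelete.Type)
}
```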

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36950

Failed: DumpClusterLogs {e2e.go}

error during ./cluster/log-dump.sh /workspace/_artifacts: exit status 1

Issues about this test specifically: #33722 #37578 #38206 #40455 #42934 #43155 #44504

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31428

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31407

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36914

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28019

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36794

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-4d04b6c9-pmvh
not to equal
    <string>: gke-bootstrap-e2e-default-pool-4d04b6c9-pmvh
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307
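
This assertion failed because the test pod landed on the very node its anti-affinity should have steered it away from. A soft anti-affinity term of the kind SchedulerPriorities exercises looks roughly like this (labels and weight are illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	aff := v1.Affinity{
		PodAntiAffinity: &v1.PodAntiAffinity{
			// Preferred (soft) terms only bias scoring, so a busy cluster
			// can still co-locate the pods; one way this test can flake.
			PreferredDuringSchedulingIgnoredDuringExecution: []v1.WeightedPodAffinityTerm{{
				Weight: 100,
				PodAffinityTerm: v1.PodAffinityTerm{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"security": "S1"}, // hypothetical label
					},
					TopologyKey: "kubernetes.io/hostname",
				},
			}},
		},
	}
	fmt.Printf("%+v\n", aff)
}
```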

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37373

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28071

Failed: [k8s.io] DNS configMap nameserver should be able to change stubDomain configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1294/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be prefer scheduled to node that satisify the NodeAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35279

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36914

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:431
Apr 28 03:23:40.832: error restarting apiserver: error running gcloud [container clusters --project=jenkins-gke-e2e-serial --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.438+19795ea7c3d55f --quiet]; got error exit status 1, stdout "", stderr "Upgrading bootstrap-e2e...\n...............................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1493374924661-56393499'\n operationType: OperationTypeValueValuesEnum(UPGRADE_MASTER, 3)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/operations/operation-1493374924661-56393499'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'Master upgrade to 1.7.0-alpha.2.438+19795ea7c3d55f failed'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/clusters/bootstrap-e2e'\n zone: u'us-central1-f'>] finished with error: Master upgrade to 1.7.0-alpha.2.438+19795ea7c3d55f failed\n"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:411

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31918

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36794

Failed: [k8s.io] PersistentVolumes:vsphere should test that a vspehre volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-da760a35-frcg
to equal
    <string>: gke-bootstrap-e2e-default-pool-da760a35-gzgc
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-da760a35-frcg
not to equal
    <string>: gke-bootstrap-e2e-default-pool-da760a35-frcg
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bdce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36457

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1295/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:vsphere should test that a vspehre volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36457

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] NoExecuteTaintManager [Serial] removing taint cancels eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29444

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37373

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31407

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37479

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] should retry creating failed daemon pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc42029b010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:431
Apr 28 07:41:44.910: error restarting apiserver: error running gcloud [container clusters --project=jenkins-gke-e2e-serial --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.440+9afeabb642e03c --quiet]; got error exit status 1, stdout "", stderr "Upgrading bootstrap-e2e...\n....................................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1493390403707-e808f833'\n operationType: OperationTypeValueValuesEnum(UPGRADE_MASTER, 3)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/operations/operation-1493390403707-e808f833'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'Master upgrade to 1.7.0-alpha.2.440+9afeabb642e03c failed'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/clusters/bootstrap-e2e'\n zone: u'us-central1-f'>] finished with error: Master upgrade to 1.7.0-alpha.2.440+9afeabb642e03c failed\n"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:411

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
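
Note: on GKE the "restart apiserver" step is implemented by shelling out to gcloud and requesting a master upgrade to the cluster's current version; the stderr quoted above is that invocation failing. A rough sketch of the idea, not the framework's verbatim code (project, zone, cluster, and version are placeholders):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // restartApiserver mirrors the gcloud call visible in the log above.
    // It requires gcloud on PATH and credentials for the target project.
    func restartApiserver(project, zone, cluster, version string) error {
        cmd := exec.Command("gcloud", "container", "clusters",
            "--project="+project, "--zone="+zone,
            "upgrade", cluster, "--master",
            "--cluster-version="+version, "--quiet")
        if out, err := cmd.CombinedOutput(); err != nil {
            // Surfaces the same "error restarting apiserver" shape as the log.
            return fmt.Errorf("error restarting apiserver: %v, output: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(restartApiserver("my-project", "us-central1-f", "bootstrap-e2e", "1.7.0"))
    }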

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1296/
Multiple broken tests:

Failed: [k8s.io] Daemon set [Serial] should retry creating failed daemon pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29444

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29512

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be prefer scheduled to node that satisify the NodeAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] PersistentVolumes:vsphere should test that a vspehre volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30441

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #34223

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28071

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35277

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:431
Apr 28 11:47:56.086: error restarting apiserver: error running gcloud [container clusters --project=jenkins-gke-e2e-serial --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.442+8787b13d756c00 --quiet]; got error exit status 1, stdout "", stderr "Upgrading bootstrap-e2e...\n....................................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1493405174897-d2941c54'\n operationType: OperationTypeValueValuesEnum(UPGRADE_MASTER, 3)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/operations/operation-1493405174897-d2941c54'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'Master upgrade to 1.7.0-alpha.2.442+8787b13d756c00 failed'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/clusters/bootstrap-e2e'\n zone: u'us-central1-f'>] finished with error: Master upgrade to 1.7.0-alpha.2.442+8787b13d756c00 failed\n"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:411

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29516

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc800>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial/1297/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36950

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29512

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37479

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29516

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31407

Failed: [k8s.io] NoExecuteTaintManager [Serial] removing taint cancels eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be prefer scheduled to node that satisify the NodeAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30441

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31428

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:431
Apr 28 14:41:13.053: error restarting apiserver: error running gcloud [container clusters --project=jenkins-gke-e2e-serial --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.470+9fbefe3b972611 --quiet]; got error exit status 1, stdout "", stderr "Upgrading bootstrap-e2e...\n...............................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1493415576900-6fe37af4'\n operationType: OperationTypeValueValuesEnum(UPGRADE_MASTER, 3)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/operations/operation-1493415576900-6fe37af4'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'Master upgrade to 1.7.0-alpha.2.470+9fbefe3b972611 failed'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/346162049984/zones/us-central1-f/clusters/bootstrap-e2e'\n zone: u'us-central1-f'>] finished with error: Master upgrade to 1.7.0-alpha.2.470+9fbefe3b972611 failed\n"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:411

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] PersistentVolumes:vsphere should test that a file written to the vspehre volume mount before kubelet restart can be read after restart [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Daemon set [Serial] should retry creating failed daemon pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35279

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27957

Failed: [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202d24d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195
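Note that every block above fails at the same two framework lines (`framework.go:125` for the wait, `:195` for the assertion) and carries the same error value (`0xc4202d24d0`), which points at one cluster-level setup failure rather than a dozen independent test bugs. For reference, here is a hedged sketch of the kind of readiness condition such a setup loop polls over `kube-system`; the function name, kubeconfig handling, and the context-taking `List` signature are modern illustrative choices, not the framework's own helper:

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allKubeSystemPodsReady reports whether every pod in kube-system is
// Running with a PodReady condition of True -- the state the setup
// wait is looking for before any [Serial] test is allowed to start.
func allKubeSystemPodsReady(cs kubernetes.Interface) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, pod := range pods.Items {
		if pod.Status.Phase != v1.PodRunning {
			return false, nil
		}
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Illustrative wiring only: load the default kubeconfig and run
	// the check once. The real framework polls this kind of condition
	// in a wait loop, so a single stuck kube-system pod fails every
	// test's setup with the same timeout error.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ok, err := allKubeSystemPodsReady(cs)
	fmt.Println(ok, err)
}
```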
