
ci-kubernetes-e2e-gci-gke-serial-release-1.5: broken test run #37913

Closed
k8s-github-robot opened this issue Dec 2, 2016 · 54 comments
Labels: area/test-infra; kind/flake (categorizes issue or PR as related to a flaky test); priority/important-soon (must be staffed and worked on either currently, or very soon, ideally in time for the next release)

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/89/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422981ae0>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Dec  1 16:18:00.203: CPU usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1d3b7ab4-98jt:
 container "kubelet": expected 50th% usage < 0.350; got 0.365, container "kubelet": expected 95th% usage < 0.500; got 0.539
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212151c0>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211ddb20>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422195a50>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216b59d0>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421693420>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212082e0>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211ddcd0>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42178ca30>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42161e010>: {
        s: "error while stopping RC: service2: Get https://104.154.250.127/api/v1/namespaces/e2e-tests-services-7chvb/replicationcontrollers/service2: dial tcp 104.154.250.127:443: getsockopt: connection refused",
    }
    error while stopping RC: service2: Get https://104.154.250.127/api/v1/namespaces/e2e-tests-services-7chvb/replicationcontrollers/service2: dial tcp 104.154.250.127:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213b78f0>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215b15b0>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215b82e0>: {
        s: "Namespace e2e-tests-services-7chvb is active",
    }
    Namespace e2e-tests-services-7chvb is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Previous issues for this suite: #37773

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/backlog Higher priority than priority/awaiting-more-evidence. area/test-infra labels Dec 2, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/94/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220ce2b0>: {
        s: "Namespace e2e-tests-services-hn5kr is active",
    }
    Namespace e2e-tests-services-hn5kr is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc421998000>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 154, 217, 203],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.154.217.203:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211a0e90>: {
        s: "Namespace e2e-tests-services-hn5kr is active",
    }
    Namespace e2e-tests-services-hn5kr is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c3afa0>: {
        s: "Namespace e2e-tests-services-hn5kr is active",
    }
    Namespace e2e-tests-services-hn5kr is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091

@k8s-github-robot k8s-github-robot added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Dec 3, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/109/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213adb00>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42186e920>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420eab6a0>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420eabeb0>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42112e080>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc421bb2050>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 154, 166, 133],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.154.166.133:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b99e40>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b7c040>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a81b20>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ebbfa0>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d6ed20>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421769ef0>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421947640>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a90d70>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f6bfb0>: {
        s: "Namespace e2e-tests-services-l9cw3 is active",
    }
    Namespace e2e-tests-services-l9cw3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/127/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42180a620>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f8d260>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212f5270>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216b1300>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f9a800>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-j09g7     gke-bootstrap-e2e-default-pool-19bbed71-4gzq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:33 -0800 PST ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  }]\nkube-dns-4101612645-wtwzc            gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:41 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:21 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:41 -0800 PST  }]\nkube-dns-autoscaler-2715466192-7xt0b gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:42 -0800 PST  }]\nl7-default-backend-2234341178-6trvm  gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:40 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:13 -0800 PST ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:40 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-j09g7     gke-bootstrap-e2e-default-pool-19bbed71-4gzq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:33 -0800 PST ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  }]
    kube-dns-4101612645-wtwzc            gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:41 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:21 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:41 -0800 PST  }]
    kube-dns-autoscaler-2715466192-7xt0b gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:42 -0800 PST  }]
    l7-default-backend-2234341178-6trvm  gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:40 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:13 -0800 PST ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:40 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421378f80>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-j09g7 gke-bootstrap-e2e-default-pool-19bbed71-4gzq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:33 -0800 PST ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-j09g7 gke-bootstrap-e2e-default-pool-19bbed71-4gzq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:33 -0800 PST ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211b4ea0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-j09g7 gke-bootstrap-e2e-default-pool-19bbed71-4gzq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:33 -0800 PST ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-j09g7 gke-bootstrap-e2e-default-pool-19bbed71-4gzq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:33 -0800 PST ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4211d1d60>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-1d5n8     gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [heapster]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]\nkube-dns-autoscaler-2715466192-1kmrw gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]\nl7-default-backend-2234341178-84wmt  gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-1d5n8     gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [heapster]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]
    kube-dns-autoscaler-2715466192-1kmrw gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]
    l7-default-backend-2234341178-84wmt  gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204
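The pod dumps in these errors all carry the same `ContainersNotReady containers with unready status: [...]` clause; when triaging many runs of this job, the unready container names can be pulled out of a dump mechanically. A minimal sketch (the `unready_containers` helper is hypothetical, not part of the e2e framework):

```python
import re

# Condition line abbreviated from the failure output above.
line = ("heapster-v1.2.0-2168613315-j09g7 gke-bootstrap-e2e-default-pool-19bbed71-4gzq "
        "Pending [{Ready False ContainersNotReady "
        "containers with unready status: [heapster heapster-nanny]}]")

def unready_containers(condition_line):
    # Extract container names from the 'containers with unready status: [...]' clause.
    m = re.search(r"containers with unready status: \[([^\]]*)\]", condition_line)
    return m.group(1).split() if m else []

print(unready_containers(line))  # -> ['heapster', 'heapster-nanny']
```

Running this over each POD row of a dump quickly shows whether the same addon containers (heapster, kube-dns, l7-default-backend) are stuck across runs.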

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421807730>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421806e80>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Dec  7 00:38:46.050: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc421ac1970>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-1d5n8     gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [heapster]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]\nkube-dns-autoscaler-2715466192-1kmrw gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]\nl7-default-backend-2234341178-84wmt  gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-1d5n8     gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [heapster]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]
    kube-dns-autoscaler-2715466192-1kmrw gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]
    l7-default-backend-2234341178-84wmt  gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 01:57:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421101900>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4206b6110>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-j09g7     gke-bootstrap-e2e-default-pool-19bbed71-4gzq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:33 -0800 PST ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  }]\nkube-dns-4101612645-wtwzc            gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:41 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:21 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:41 -0800 PST  }]\nkube-dns-autoscaler-2715466192-7xt0b gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:42 -0800 PST  }]\nl7-default-backend-2234341178-6trvm  gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:40 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:13 -0800 PST ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:40 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-j09g7     gke-bootstrap-e2e-default-pool-19bbed71-4gzq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:33 -0800 PST ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST  }]
    kube-dns-4101612645-wtwzc            gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:41 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:21 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:41 -0800 PST  }]
    kube-dns-autoscaler-2715466192-7xt0b gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:42 -0800 PST  }]
    l7-default-backend-2234341178-6trvm  gke-bootstrap-e2e-default-pool-19bbed71-px14 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:40 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:20:13 -0800 PST ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 00:19:40 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Dec  7 02:32:40.007: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42184dd60>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42088a1f0>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ff6e00>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42155d050>: {
        s: "error while stopping RC: service2: Get https://104.198.30.102/api/v1/namespaces/e2e-tests-services-j5mth/replicationcontrollers/service2: dial tcp 104.198.30.102:443: getsockopt: connection refused",
    }
    error while stopping RC: service2: Get https://104.198.30.102/api/v1/namespaces/e2e-tests-services-j5mth/replicationcontrollers/service2: dial tcp 104.198.30.102:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210fec10>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Dec  7 05:14:18.949: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Dec  7 02:48:56.099: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421cebd80>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Dec  7 01:49:38.011: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42160d5a0>: {
        s: "Namespace e2e-tests-services-j5mth is active",
    }
    Namespace e2e-tests-services-j5mth is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Dec  7 04:57:37.109: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/140/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219d9720>: {
        s: "Namespace e2e-tests-services-lnk7n is active",
    }
    Namespace e2e-tests-services-lnk7n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f3f750>: {
        s: "Namespace e2e-tests-services-lnk7n is active",
    }
    Namespace e2e-tests-services-lnk7n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421798550>: {
        s: "Namespace e2e-tests-services-lnk7n is active",
    }
    Namespace e2e-tests-services-lnk7n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420fdaf70>: {
        s: "Namespace e2e-tests-services-lnk7n is active",
    }
    Namespace e2e-tests-services-lnk7n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4206ba3f0>: {
        s: "error while stopping RC: service2: unexpected EOF",
    }
    error while stopping RC: service2: unexpected EOF
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c13740>: {
        s: "Namespace e2e-tests-services-lnk7n is active",
    }
    Namespace e2e-tests-services-lnk7n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42175fff0>: {
        s: "Namespace e2e-tests-services-lnk7n is active",
    }
    Namespace e2e-tests-services-lnk7n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42128b660>: {
        s: "Namespace e2e-tests-services-lnk7n is active",
    }
    Namespace e2e-tests-services-lnk7n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216668f0>: {
        s: "Namespace e2e-tests-services-lnk7n is active",
    }
    Namespace e2e-tests-services-lnk7n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/141/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c35f30>: {
        s: "Namespace e2e-tests-services-p5739 is active",
    }
    Namespace e2e-tests-services-p5739 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42199a1b0>: {
        s: "Namespace e2e-tests-services-p5739 is active",
    }
    Namespace e2e-tests-services-p5739 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42065d050>: {
        s: "Namespace e2e-tests-services-p5739 is active",
    }
    Namespace e2e-tests-services-p5739 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214e0d70>: {
        s: "Namespace e2e-tests-services-p5739 is active",
    }
    Namespace e2e-tests-services-p5739 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42199b9a0>: {
        s: "Namespace e2e-tests-services-p5739 is active",
    }
    Namespace e2e-tests-services-p5739 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218bce20>: {
        s: "Namespace e2e-tests-services-p5739 is active",
    }
    Namespace e2e-tests-services-p5739 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221db9b0>: {
        s: "Namespace e2e-tests-services-p5739 is active",
    }
    Namespace e2e-tests-services-p5739 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b67500>: {
        s: "Namespace e2e-tests-services-p5739 is active",
    }
    Namespace e2e-tests-services-p5739 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc420d99e60>: {
        s: "error while stopping RC: service2: timed out waiting for \"service2\" to be synced",
    }
    error while stopping RC: service2: timed out waiting for "service2" to be synced
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216c13a0>: {
        s: "Namespace e2e-tests-services-p5739 is active",
    }
    Namespace e2e-tests-services-p5739 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340
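Every SchedulerPredicates failure in this run is the same precondition error, not a scheduling bug: the serial suite refuses to start while a namespace left behind by an earlier test (here the services namespace from the failed apiserver-restart test) is still active. A minimal sketch of that kind of guard, with hypothetical names rather than the actual e2e framework helpers:

```go
package main

import (
	"fmt"
	"strings"
)

// leftoverNamespaces returns e2e test namespaces that are still active.
// The real framework lists namespaces via the apiserver and excludes a
// configurable set of system namespaces; this sketch only filters names.
func leftoverNamespaces(active []string) []string {
	var leftovers []string
	for _, ns := range active {
		// e2e namespaces are created with an "e2e-tests-" prefix.
		if strings.HasPrefix(ns, "e2e-tests-") {
			leftovers = append(leftovers, ns)
		}
	}
	return leftovers
}

func main() {
	active := []string{"kube-system", "default", "e2e-tests-services-7chvb"}
	for _, ns := range leftoverNamespaces(active) {
		// This is the message the serial tests surface in the logs above.
		fmt.Printf("Namespace %s is active\n", ns)
	}
}
```

So a single disruptive-test failure that leaks its namespace cascades into every subsequent `[Serial]` scheduler test in the run.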

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/146/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216505a0>: {
        s: "Namespace e2e-tests-services-2c9mt is active",
    }
    Namespace e2e-tests-services-2c9mt is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421798bd0>: {
        s: "error while stopping RC: service2: Get https://104.154.148.2/api/v1/namespaces/e2e-tests-services-2c9mt/replicationcontrollers/service2: dial tcp 104.154.148.2:443: getsockopt: connection refused",
    }
    error while stopping RC: service2: Get https://104.154.148.2/api/v1/namespaces/e2e-tests-services-2c9mt/replicationcontrollers/service2: dial tcp 104.154.148.2:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42177afc0>: {
        s: "Namespace e2e-tests-services-2c9mt is active",
    }
    Namespace e2e-tests-services-2c9mt is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217f0cb0>: {
        s: "Namespace e2e-tests-services-2c9mt is active",
    }
    Namespace e2e-tests-services-2c9mt is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218e5480>: {
        s: "Namespace e2e-tests-services-2c9mt is active",
    }
    Namespace e2e-tests-services-2c9mt is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/147/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ba0e40>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc420e05090>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 8, 35, 196, 160],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 8.35.196.160:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42197a240>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213458c0>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a0b6b0>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421937360>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421966c70>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219ae2e0>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212c5120>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42197b7f0>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421923920>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212bc070>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219a34d0>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f40fb0>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c8f7f0>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4209bd560>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212b99e0>: {
        s: "Namespace e2e-tests-services-4d1rs is active",
    }
    Namespace e2e-tests-services-4d1rs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/151/

Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc421e55590>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 197, 167, 192],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.197.167.192:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421224400>: {
        s: "Namespace e2e-tests-services-mxf50 is active",
    }
    Namespace e2e-tests-services-mxf50 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d51580>: {
        s: "Namespace e2e-tests-services-mxf50 is active",
    }
    Namespace e2e-tests-services-mxf50 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211367e0>: {
        s: "Namespace e2e-tests-services-mxf50 is active",
    }
    Namespace e2e-tests-services-mxf50 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/153/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421678020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420a40020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc420a5ab90>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc420e1e000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42164a9b0>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42164a1d0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/155/

Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4210c2150>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420b70000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42170efd0>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc421222c20>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc4210c20c0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc420ac6000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/156/

Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Dec 12 13:52:36.337: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc42036ee80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1643

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42036ee80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1643

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213ba0b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-ffvvb gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-ffvvb gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc422072ad0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-ffvvb gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-ffvvb gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421820fa0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-ffvvb gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-ffvvb gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421740800>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-x3460 gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 12:29:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 12:29:57 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 12:29:57 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-x3460 gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 12:29:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 12:29:57 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 12:29:57 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42208ee30>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-ffvvb gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-ffvvb gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc421448220>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0-2168613315-ffvvb gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0-2168613315-ffvvb gke-bootstrap-e2e-default-pool-09b71a52-iffu Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-12 13:01:00 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/157/

Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4216c4ae0>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4216c5240>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4212d0020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc4206ce010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4213e4000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/178/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421892020>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc420f3e460>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 32, 34],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.32.34:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:417

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210388d0>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b8b330>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216ad310>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214d9980>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b56a10>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218bc6b0>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420fdb9d0>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420a52650>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ee9590>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42109c220>: {
        s: "Namespace e2e-tests-services-26g5p is active",
    }
    Namespace e2e-tests-services-26g5p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/187/

Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4218be2b0>: {
        s: "error while stopping RC: service2: timed out waiting for \"service2\" to be synced",
    }
    error while stopping RC: service2: timed out waiting for "service2" to be synced
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420df0f00>: {
        s: "Namespace e2e-tests-services-6tp77 is active",
    }
    Namespace e2e-tests-services-6tp77 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42199e3b0>: {
        s: "Namespace e2e-tests-services-6tp77 is active",
    }
    Namespace e2e-tests-services-6tp77 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/188/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c2b030>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e693a0>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421179080>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211795f0>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421716400>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420a16ad0>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d68c00>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c2b740>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ea9480>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4206fba70>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421219ed0>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*url.Error | 0xc4211c4c60>: {
        Op: "Get",
        URL: "https://35.184.48.55/api/v1/namespaces/e2e-tests-services-zffst/services/service2",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 48, 55],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.48.55/api/v1/namespaces/e2e-tests-services-zffst/services/service2: dial tcp 35.184.48.55:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:444

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215300b0>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b1fde0>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421931f50>: {
        s: "Namespace e2e-tests-services-zffst is active",
    }
    Namespace e2e-tests-services-zffst is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/198/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4206c5e80>: {
        s: "Namespace e2e-tests-services-bv272 is active",
    }
    Namespace e2e-tests-services-bv272 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c8f470>: {
        s: "Namespace e2e-tests-services-bv272 is active",
    }
    Namespace e2e-tests-services-bv272 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421305310>: {
        s: "Namespace e2e-tests-services-bv272 is active",
    }
    Namespace e2e-tests-services-bv272 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*url.Error | 0xc42145d410>: {
        Op: "Get",
        URL: "https://104.154.210.133/api/v1/namespaces/e2e-tests-services-bv272/services/service2",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 154, 210, 133],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.154.210.133/api/v1/namespaces/e2e-tests-services-bv272/services/service2: dial tcp 104.154.210.133:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:444

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214a4660>: {
        s: "Namespace e2e-tests-services-bv272 is active",
    }
    Namespace e2e-tests-services-bv272 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42193b030>: {
        s: "Namespace e2e-tests-services-bv272 is active",
    }
    Namespace e2e-tests-services-bv272 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42180d5b0>: {
        s: "Namespace e2e-tests-services-bv272 is active",
    }
    Namespace e2e-tests-services-bv272 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c6a610>: {
        s: "Namespace e2e-tests-services-bv272 is active",
    }
    Namespace e2e-tests-services-bv272 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421475ad0>: {
        s: "Namespace e2e-tests-services-bv272 is active",
    }
    Namespace e2e-tests-services-bv272 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/204/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42087b5c0>: {
        s: "Namespace e2e-tests-services-700g6 is active",
    }
    Namespace e2e-tests-services-700g6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c3e1d0>: {
        s: "Namespace e2e-tests-services-700g6 is active",
    }
    Namespace e2e-tests-services-700g6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42127b870>: {
        s: "Namespace e2e-tests-services-700g6 is active",
    }
    Namespace e2e-tests-services-700g6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215f4850>: {
        s: "Namespace e2e-tests-services-700g6 is active",
    }
    Namespace e2e-tests-services-700g6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc42137dd60>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 69, 38],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.69.38:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:417

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/218/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d083d0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc420c90b40>: {
        s: "couldn't find 10 pods within 5m0s; last error: expected to find 10 pods but found only 11",
    }
    couldn't find 10 pods within 5m0s; last error: expected to find 10 pods but found only 11
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:119

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Dec 24 03:04:47.581: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420fe84a0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420917fd0>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nkube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\nkube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 
PST  }]\nkubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    kube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    kube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  }]
    kubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:246
Dec 24 02:36:34.834: Pods on node gke-bootstrap-e2e-default-pool-3052d533-2gfn are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:182

Issues about this test specifically: #36794

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc420795460>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204
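The recurring "N / M pods in namespace \"kube-system\" are NOT in RUNNING and READY state" errors in this run come from a precondition that counts system pods whose phase is not `Running` or whose `Ready` condition is `False` (here, the fluentd and kube-dns pods stuck in `Pending` after the disruptive tests). A minimal sketch of that filter is below, with a tiny stand-in struct instead of the real pod API types:

```go
package main

import "fmt"

// podStatus is a stand-in for the two fields the readiness check
// inspects on each pod: its phase and its Ready condition.
type podStatus struct {
	name  string
	phase string // "Running", "Pending", ...
	ready bool
}

// notRunningAndReady returns the pods that would be reported by the
// "NOT in RUNNING and READY state" precondition failure.
func notRunningAndReady(pods []podStatus) []podStatus {
	var bad []podStatus
	for _, p := range pods {
		if p.phase != "Running" || !p.ready {
			bad = append(bad, p)
		}
	}
	return bad
}

func main() {
	pods := []podStatus{
		{"fluentd-cloud-logging-node-a", "Pending", false},
		{"kube-proxy-node-a", "Running", true},
		{"kube-dns-4101612645-clshg", "Pending", false},
	}
	bad := notRunningAndReady(pods)
	fmt.Printf("%d / %d pods in namespace %q are NOT in RUNNING and READY state\n",
		len(bad), len(pods), "kube-system")
}
```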

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213638c0>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nkube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\nkube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 
PST  }]\nkubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    kube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    kube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  }]
    kubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420def4b0>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:07 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:06:45 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:07:00 -0800 PST  }]\nkube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\nkube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 
PST  }]\nkubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:07 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:06:45 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:07:00 -0800 PST  }]
    kube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    kube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  }]
    kubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b5d3f0>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:07 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:06:45 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:07:00 -0800 PST  }]\nkube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\nkube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  }]\nkubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:07 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:06:45 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:07:00 -0800 PST  }]
    kube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    kube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  }]
    kubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ace090>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4209a7d30>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Dec 24 01:49:53.733: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b86120>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d60320>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nkube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\nkube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  }]\nkubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    kube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    kube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  }]
    kubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.StatusError | 0xc421070300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.96.2.42:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rs-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rs-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.96.2.42:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.96.2.42:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'") has prevented the request from succeeding (post services rs-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:212

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420dfe010>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:07 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:06:45 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:07:00 -0800 PST  }]\nkube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\nkube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  }]\nkubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:07 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:06:45 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:07:00 -0800 PST  }]
    kube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    kube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  }]
    kubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c22010>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nkube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\nkube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 
PST  }]\nkubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    kube-dns-4101612645-clshg                                          gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:25 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    kube-dns-autoscaler-2715466192-lsgzf                               gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:57 -0800 PST  }]
    kubernetes-dashboard-3543765157-dcd22                              gke-bootstrap-e2e-default-pool-3052d533-2gfn Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:20 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:04:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42102e030>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b87650>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:05:04 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42137c4a0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-hsxw gke-bootstrap-e2e-default-pool-3052d533-hsxw Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 02:03:39 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-3052d533-ldsk gke-bootstrap-e2e-default-pool-3052d533-ldsk Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-24 01:03:42 -

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/228/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 25 22:12:19.209: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420edd678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 25 23:06:41.128: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219d6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37479

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 25 22:42:03.657: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420edf678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35277

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 25 21:10:52.301: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42033b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 25 22:29:06.560: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42137c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b7b7d0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a1e60965-6psc gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-a1e60965-6psc            gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a1e60965-6psc gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-a1e60965-6psc            gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4208a31a0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-a1e60965-6psc boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-a1e60965-6psc boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
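The restart failure above reports "error waiting for node ... boot ID to change: timed out waiting for the condition". The underlying check is simple: a node's boot ID is a per-boot UUID, so a successful reboot must produce a new value. A minimal sketch of that wait loop follows; `read_boot_id` is a hypothetical stand-in for the e2e framework's SSH-based lookup, and here it just reads the local machine's boot ID (Linux-only path), with a deliberately short timeout.

```shell
# Hypothetical stand-in: the real test reads the remote node's boot ID over SSH.
read_boot_id() { cat /proc/sys/kernel/random/boot_id; }

old=$(read_boot_id)
deadline=$((SECONDS + 2))   # the real test waits minutes, not seconds
while [ "$SECONDS" -lt "$deadline" ]; do
  if [ "$(read_boot_id)" != "$old" ]; then
    echo "node rebooted: boot ID changed"
    break
  fi
  sleep 1
done
# If the loop finishes without a change, the test surfaces the timeout error seen above.
```

On a healthy restart the boot ID changes and the loop exits early; the flake means the node never came back with a new boot ID within the timeout.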

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42175a5c0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a1e60965-6psc gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-a1e60965-6psc            gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a1e60965-6psc gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-a1e60965-6psc            gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 25 21:26:13.449: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420af2278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b32550>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a1e60965-6psc gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-a1e60965-6psc            gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a1e60965-6psc gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-a1e60965-6psc            gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211eb280>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a1e60965-6psc gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-a1e60965-6psc            gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a1e60965-6psc gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-a1e60965-6psc            gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215eb800>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a1e60965-6psc gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-a1e60965-6psc            gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-a1e60965-6psc gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-a1e60965-6psc            gke-bootstrap-e2e-default-pool-a1e60965-6psc Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 19:55:13 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:07 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-25 20:54:06 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346
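Most of the SchedulerPredicates failures in this run share one root cause: the pre-test check found kube-system pods that were not both Running and Ready within 5m0s. That check can be reproduced by counting pods whose READY column (`ready/total`) is not full or whose STATUS is not `Running`. A self-contained sketch, using inlined sample output with hypothetical pod names in place of live `kubectl get pods -n kube-system` output:

```shell
# Sample `kubectl get pods -n kube-system` output (hypothetical pod names),
# inlined so the snippet runs without cluster access.
cat <<'EOF' > /tmp/pods.txt
NAME                          READY   STATUS    RESTARTS   AGE
fluentd-cloud-logging-node-a  0/1     Pending   0          5m
kube-proxy-node-a             1/1     Running   0          5m
kube-dns-4101612645-clshg     2/4     Pending   0          5m
EOF

# A pod counts as not-ready if ready != total containers, or its phase is not Running.
awk 'NR>1 { split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") n++ }
     END { print n " pods not Running/Ready" }' /tmp/pods.txt
```

Against a live cluster, pipe `kubectl get pods -n kube-system` into the same `awk` filter; a nonzero count matches the "N / 11 pods ... are NOT in RUNNING and READY state" errors logged above.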

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/235/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421239e40>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 
PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42064aa90>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 
PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4205e7610>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:48:58 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:52:40 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:52:41 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:52:41 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:52:41 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:48:58 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:52:40 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:52:41 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:52:41 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:52:41 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42064af60>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42171f0c0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f5e3d0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42134dd20>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc42178fdf0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc421650750>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421269de0>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211a05b0>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:34:48 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:34:48 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4206d7c40>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:34:48 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:34:48 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93
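
Most of the scheduler-predicate failures above share one root cause: kube-system pods stuck Pending, so the test's pre-check never passes. As a quick triage sketch (the pod names and READY counts below are made-up sample output, not taken from this run), counting not-ready pods from `kubectl get pods --namespace=kube-system`-style output:

```python
# Triage sketch: count pods whose READY column (ready/total) is not full.
# SAMPLE is hypothetical; in practice you would feed in the live output of
# `kubectl get pods --namespace=kube-system`.
SAMPLE = """\
NAME                        READY  STATUS
kube-dns-4101612645-pfc4f   3/4    Pending
fluentd-cloud-logging-9cl6  0/1    Pending
kube-proxy-abc12            1/1    Running
"""

def count_not_ready(table):
    not_ready = 0
    for line in table.splitlines()[1:]:  # skip the header row
        ready, total = line.split()[1].split("/")
        if ready != total:
            not_ready += 1
    return not_ready

print(count_not_ready(SAMPLE))
```

Pods that stay in this state for the full 5m0s wait produce exactly the "NOT in RUNNING and READY state" error shown above.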

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f5f7b0>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211fc680>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 05:35:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Dec 27 04:07:41.875: CPU usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-03ec477e-9cl6:
 container "runtime": expected 95th% usage < 0.200; got 0.280
 node gke-bootstrap-e2e-default-pool-03ec477e-pbq7:
 container "runtime": expected 95th% usage < 0.200; got 0.500
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26784 #28384 #31935 #33023
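
The kubelet perf test asserts 50th- and 95th-percentile CPU usage against fixed per-container limits. As an illustrative sketch only (nearest-rank convention; not the actual test-framework code), a percentile over sampled CPU fractions looks like:

```python
import math

def nearest_rank_percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% of values <= it."""
    xs = sorted(samples)
    k = math.ceil(p / 100.0 * len(xs))  # 1-based rank
    return xs[max(k, 1) - 1]

# Made-up runtime-container CPU samples (fraction of a core); the 95th
# percentile here would trip a 0.200 limit like the one in the failure above.
cpu = [0.05, 0.07, 0.08, 0.10, 0.11, 0.12, 0.14, 0.16, 0.19, 0.28]
print(nearest_rank_percentile(cpu, 95))
print(nearest_rank_percentile(cpu, 50))
```

A single garbage-collection or image-pull spike near the end of the sampling window can therefore push the 95th percentile over the limit even when median usage is healthy.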

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420731580>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:246
Dec 27 07:38:28.997: Pods on node gke-bootstrap-e2e-default-pool-03ec477e-pbq7 are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:182

Issues about this test specifically: #36794

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e5c450>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]\nkube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\nkube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-9cl6 gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:14 -0800 PST  }]
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-03ec477e-pbq7 gke-bootstrap-e2e-default-pool-03ec477e-pbq7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 03:01:16 -0800 PST  }]
    kube-dns-4101612645-pfc4f                                          gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:16 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    kube-dns-autoscaler-2715466192-dz085                               gke-bootstrap-e2e-default-pool-03ec477e-9cl6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:51:11 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-27 02:50:36 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Dec 27 04:30:04.111: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Dec 27 05:27:17.704: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Dec 27 04:52:33.127: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082
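
The three HPA timeouts above are all waits for the replica count to converge on the value implied by observed CPU utilization. As a rough sketch of the documented scaling rule (not the controller's actual code, which also applies tolerances and stabilization):

```python
import math

def desired_replicas(current, current_utilization, target_utilization):
    """HPA scaling rule, roughly as documented:
    desired = ceil(current * currentUtilization / targetUtilization)."""
    return max(1, math.ceil(current * current_utilization / target_utilization))

# With a 50% CPU target (illustrative numbers):
print(desired_replicas(1, 150, 50))  # load tripled  -> 1 pod scales to 3
print(desired_replicas(5, 25, 50))   # pods half-idle -> 5 pods scale to 3
print(desired_replicas(3, 100, 300)) # well under target -> 3 pods scale to 1
```

If heapster/metrics collection is degraded (as the not-ready kube-system pods in earlier runs suggest), the controller never sees utilization cross the target and the 15m wait for the expected replica count times out.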

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/236/
Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc4212a0fe0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc42133a0d0>: {
        s: "service verification failed for: 10.99.244.43\nexpected [service1-0nv64 service1-k64wp service1-n4fjr]\nreceived [service1-n4fjr wget: download timed out]",
    }
    service verification failed for: 10.99.244.43
    expected [service1-0nv64 service1-k64wp service1-n4fjr]
    received [service1-n4fjr wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:246
Dec 27 11:03:53.075: Pods on node gke-bootstrap-e2e-default-pool-09095945-0l5z are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:182

Issues about this test specifically: #36794

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Dec 27 12:05:40.947: Number of replicas has changed: expected 3, got 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:292

Issues about this test specifically: #28657 #30519 #33878

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/240/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc42109ed20>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 3, 99],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.3.99:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
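
A `getsockopt: connection refused` dial error is expected for a window while the apiserver restarts; robustness tests typically poll the endpoint until it accepts connections again rather than dialing once. A generic polling sketch (illustrative only; the e2e framework has its own restart helpers):

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Poll a TCP endpoint until it accepts a connection or the deadline passes.

    Returns True once a connection succeeds, False if the deadline expires.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:  # connection refused / timed out; retry until deadline
            time.sleep(interval)
    return False
```

The failure above means the dial was attempted (or gave up) while the endpoint was still refusing connections, e.g. `wait_for_port("35.184.3.99", 443)` returning False.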

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219d3a70>: {
        s: "Namespace e2e-tests-services-1pz4n is active",
    }
    Namespace e2e-tests-services-1pz4n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42165abd0>: {
        s: "Namespace e2e-tests-services-1pz4n is active",
    }
    Namespace e2e-tests-services-1pz4n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42165a5e0>: {
        s: "Namespace e2e-tests-services-1pz4n is active",
    }
    Namespace e2e-tests-services-1pz4n is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/249/
Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-3ad4-pvc-8aa8288d-ce56-11e6-9e2a-42010af00033  us-central1-a  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060
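
DiffResources compares GCP resource listings captured before and after the run; anything present only afterwards (here, an orphaned PVC-backed persistent disk) is flagged as a leak. Conceptually it is a set difference (the disk names below are abridged and hypothetical):

```python
def leaked_resources(before, after):
    """Resources present after the run but not before are reported as leaks."""
    return sorted(set(after) - set(before))

print(leaked_resources(
    before=["gke-bootstrap-e2e-boot-disk"],
    after=["gke-bootstrap-e2e-boot-disk",
           "gke-bootstrap-e2e-3ad4-pvc-8aa8288d"],  # abridged leaked-PVC disk name
))
```

A leak like this usually means a test's PersistentVolumeClaim cleanup did not run or did not finish before teardown snapshotted the resource list.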

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220fe270>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 30 00:39:54.328: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220538f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Dec 29 22:26:16.749: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc42204cfc0>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/258/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42110abe0>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210b8f50>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421bad620>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc420c107d0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 154, 132, 165],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.154.132.165:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420aa3a80>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c6b630>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216721b0>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217dc9e0>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42136a300>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4206c85a0>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212d8950>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c48e00>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4209ace00>: {
        s: "Namespace e2e-tests-services-6vjs0 is active",
    }
    Namespace e2e-tests-services-6vjs0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/259/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421019130>: {
        s: "Namespace e2e-tests-services-7bw2j is active",
    }
    Namespace e2e-tests-services-7bw2j is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422047d70>: {
        s: "Namespace e2e-tests-services-7bw2j is active",
    }
    Namespace e2e-tests-services-7bw2j is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212ee2b0>: {
        s: "Namespace e2e-tests-services-7bw2j is active",
    }
    Namespace e2e-tests-services-7bw2j is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a881d0>: {
        s: "Namespace e2e-tests-services-7bw2j is active",
    }
    Namespace e2e-tests-services-7bw2j is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f48b70>: {
        s: "Namespace e2e-tests-services-7bw2j is active",
    }
    Namespace e2e-tests-services-7bw2j is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421ff3100>: {
        s: "error while stopping RC: service1: Get https://35.184.63.200/api/v1/namespaces/e2e-tests-services-7bw2j/replicationcontrollers/service1: dial tcp 35.184.63.200:443: getsockopt: connection refused",
    }
    error while stopping RC: service1: Get https://35.184.63.200/api/v1/namespaces/e2e-tests-services-7bw2j/replicationcontrollers/service1: dial tcp 35.184.63.200:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:417

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/260/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e45330>: {
        s: "Namespace e2e-tests-services-jk765 is active",
    }
    Namespace e2e-tests-services-jk765 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ba2740>: {
        s: "Namespace e2e-tests-services-jk765 is active",
    }
    Namespace e2e-tests-services-jk765 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421996e30>: {
        s: "Namespace e2e-tests-services-jk765 is active",
    }
    Namespace e2e-tests-services-jk765 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc422451450>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 63, 200],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.63.200:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203ecdd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28657 #30519 #33878

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/263/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc420cc4b50>: {
        s: "error while stopping RC: service2: timed out waiting for \"service2\" to be synced",
    }
    error while stopping RC: service2: timed out waiting for "service2" to be synced
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216865c0>: {
        s: "Namespace e2e-tests-services-cb2b4 is active",
    }
    Namespace e2e-tests-services-cb2b4 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213a3740>: {
        s: "Namespace e2e-tests-services-cb2b4 is active",
    }
    Namespace e2e-tests-services-cb2b4 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/266/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f5a060>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420decfb0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-99d45fad-zf1c gke-bootstrap-e2e-default-pool-99d45fad-zf1c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 22:10:02 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-01 22:10:43 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 23:18:33 -0800 PST  }]\nheapster-v1.2.0-2168613315-h99l4                                   gke-bootstrap-e2e-default-pool-99d45fad-zf1c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 22:12:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-01 22:12:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 22:12:15 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-99d45fad-zf1c            gke-bootstrap-e2e-default-pool-99d45fad-zf1c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 23:18:33 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-01 23:18:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 23:18:33 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-99d45fad-zf1c gke-bootstrap-e2e-default-pool-99d45fad-zf1c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 22:10:02 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-01 22:10:43 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 23:18:33 -0800 PST  }]
    heapster-v1.2.0-2168613315-h99l4                                   gke-bootstrap-e2e-default-pool-99d45fad-zf1c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 22:12:15 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-01 22:12:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 22:12:15 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-99d45fad-zf1c            gke-bootstrap-e2e-default-pool-99d45fad-zf1c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 23:18:33 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-01 23:18:33 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-01 23:18:33 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:148
Expected error:
    <*errors.errorString | 0xc42038ccf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:122

Issues about this test specifically: #31428

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 01:45:05.368: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211ca4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 00:33:18.199: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e9a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 00:42:27.338: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212b4ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 00:17:41.648: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ee64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 00:46:11.872: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214fc4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30441

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  2 00:12:51.483: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c96ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-99d45fad-zf1c\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-99d45fad-zf1c" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023
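When usage data is available, these kubelet_perf tests compare per-container CPU percentiles against limits, as in the "expected 50th% usage < 0.350; got 0.365" failure at the top of this report. A hedged, stdlib-only sketch of a nearest-rank percentile check (sample values taken from that reported failure; function names are illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the p-th percentile (0-100) of samples using
// nearest-rank selection on a sorted copy.
func percentile(samples []float64, p float64) float64 {
	s := append([]float64(nil), samples...)
	sort.Float64s(s)
	rank := int(p/100*float64(len(s))+0.5) - 1
	if rank < 0 {
		rank = 0
	}
	if rank >= len(s) {
		rank = len(s) - 1
	}
	return s[rank]
}

func main() {
	// Hypothetical CPU samples (cores) whose percentiles match the failure
	// reported for the kubelet container.
	cpu := []float64{0.30, 0.34, 0.365, 0.40, 0.539}
	limits := map[float64]float64{50: 0.350, 95: 0.500}
	for _, p := range []float64{50, 95} {
		if got := percentile(cpu, p); got >= limits[p] {
			fmt.Printf("container \"kubelet\": expected %.0fth%% usage < %.3f; got %.3f\n", p, limits[p], got)
		}
	}
}
```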

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc420e49180>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-99d45fad-zf1c boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-99d45fad-zf1c boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/290/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d96db0>: {
        s: "Namespace e2e-tests-services-bjdv1 is active",
    }
    Namespace e2e-tests-services-bjdv1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc42177fe50>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 73, 207],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.198.73.207:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
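Two details of the `*net.OpError` dumps above are worth decoding: `Err: 0x6f` is errno 111, ECONNREFUSED on Linux, which is why the message renders as "connection refused"; and the 16-byte `Addr.IP` slice is an IPv4 address stored in IPv4-mapped IPv6 form, which `net.IP.String` prints as a plain dotted quad. A small sketch:

```go
package main

import (
	"fmt"
	"net"
)

// renderIP formats a raw address byte slice the way net.IP does:
// IPv4-mapped IPv6 addresses come out as plain dotted-quad IPv4.
func renderIP(b []byte) string {
	return net.IP(b).String()
}

func main() {
	// The Addr.IP byte slice from the OpError dump above.
	raw := []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 73, 207}
	fmt.Println(renderIP(raw)) // 104.198.73.207
}
```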

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b896c0>: {
        s: "Namespace e2e-tests-services-bjdv1 is active",
    }
    Namespace e2e-tests-services-bjdv1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e35dc0>: {
        s: "Namespace e2e-tests-services-bjdv1 is active",
    }
    Namespace e2e-tests-services-bjdv1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42145c640>: {
        s: "Namespace e2e-tests-services-bjdv1 is active",
    }
    Namespace e2e-tests-services-bjdv1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a03920>: {
        s: "Namespace e2e-tests-services-bjdv1 is active",
    }
    Namespace e2e-tests-services-bjdv1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/300/
Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc42113c010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421990010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc4219ea010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421636130>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421636d20>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:444

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc420fd70b0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/304/
Multiple broken tests:

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Jan  9 05:28:56.889: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Jan  9 05:44:35.509: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642
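The "timeout waiting 15m0s for pods size to be N" message comes from a poll-until-deadline loop in the autoscaling test utilities. A minimal Python sketch of that wait pattern (illustrative only, not the actual Go helper; `clock` and `sleep` are injectable so the loop can be exercised without real delays):

```python
import time

def wait_for(condition, timeout, interval=1.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll condition() until it returns True or `timeout` seconds elapse.

    Returns True on success, False on timeout -- the e2e helper turns the
    False case into a "timeout waiting <d> for pods size to be N" failure.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return False
```

With a fake clock, a condition that becomes true on the third poll succeeds, while a condition that never becomes true returns False once the simulated deadline passes.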

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Jan  9 04:12:44.426: CPU usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-b4ac3406-pfp3:
 container "runtime": expected 95th% usage < 0.500; got 0.610
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
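The kubelet perf failure above compares observed per-container CPU usage percentiles against limits (e.g. the 95th percentile must stay under 0.500 cores). A minimal Python sketch of that comparison, assuming nearest-rank percentiles; the real test in kubelet_perf.go aggregates resource samples differently, so treat this as illustrative only:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a non-empty sample list, p in (0, 100]."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100.0 * len(s)) - 1)
    return s[k]

def check_cpu_limits(samples, limits):
    """Return a violation message per exceeded percentile limit.

    limits maps percentile -> max usage in cores, e.g. {50: 0.35, 95: 0.50},
    mirroring the 'expected Pth% usage < L; got U' messages in the log.
    """
    violations = []
    for p, limit in sorted(limits.items()):
        usage = percentile(samples, p)
        if usage > limit:
            violations.append(
                "expected %dth%% usage < %.3f; got %.3f" % (p, limit, usage))
    return violations
```

For example, samples whose 95th percentile is 0.610 against a 0.500 limit reproduce the message shape seen in the failure above.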

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42104bf20>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:48:56 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST 
 }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:48:56 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516
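The repeated `N / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state` errors come from a pre-test check that every system pod has either succeeded or is Running with a `Ready` condition of `True`. A minimal Python sketch of that readiness predicate over a pod status dict (illustrative; the actual check is Go code in the e2e framework):

```python
def pod_running_and_ready(pod):
    """True if the pod has succeeded, or is Running with Ready == "True".

    Pods like the Pending fluentd-cloud-logging pod in the log above
    (ContainersNotReady) fail this predicate and are listed in the error.
    """
    status = pod.get("status", {})
    phase = status.get("phase")
    if phase == "Succeeded":
        return True
    if phase != "Running":
        return False
    return any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in status.get("conditions", []))
```

Applied to the pod statuses dumped in the error text, the Pending pods with `Ready False` are exactly the ones counted as not running and ready.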

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421202b30>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST 
 }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan  9 03:28:13.038: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42188c420>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST 
 }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b91600>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST 
 }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215a2e20>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST 
 }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42186bfc0>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST 
 }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42206c2b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Jan  9 06:26:01.681: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163
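The HPA failure above is a plain timeout: the resource consumer never drove the Deployment to 3 replicas within 15m. For context, the replica count the autoscaler targets is roughly `ceil(currentReplicas * currentUtilization / targetUtilization)`; a minimal sketch of that rule (simplified, ignoring stabilization windows and tolerance bands):

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization):
    """Approximate HPA scale rule: ceil(currentReplicas * currentUtil / targetUtil)."""
    ratio = current_utilization / target_utilization
    # Never scale below one replica in this simplified model.
    return max(1, math.ceil(current_replicas * ratio))

# 1 pod at 70% CPU against a 30% target scales to 3; 3 pods at 50% scale to 5 --
# loosely the same 1 -> 3 -> 5 path this test waits for.
print(desired_replicas(1, 70, 30), desired_replicas(3, 50, 30))  # -> 3 5
```

The timeout here means utilization never stayed above target long enough for the controller to act, not that the arithmetic produced the wrong count.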

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d094b0>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420db6490>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:48:56 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:48:56 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f40c10>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc420fafbe0>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:48:56 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:48:56 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42139df60>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422015c20>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]\nkube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\nkubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 gke-bootstrap-e2e-default-pool-b4ac3406-pfp3 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:45:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:46:58 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 04:42:26 -0800 PST  }]
    kube-dns-4101612645-40wfs                                          gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:38 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kube-dns-autoscaler-2715466192-3vmrj                               gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    kubernetes-dashboard-3543765157-qm9w3                              gke-bootstrap-e2e-default-pool-b4ac3406-h383 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:32 -0800 PST ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-09 02:47:08 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340
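All the scheduler-predicate failures in this run share the same precondition: before each test, the suite waits up to 5m for every pod in kube-system to be both Running and Ready, and reports the stragglers in exactly the table shown above. A simplified stand-in for that readiness filter (hypothetical dict shapes, not the e2e framework's actual types):

```python
def not_ready_pods(pods):
    """Return names of pods that are not both Running and Ready.

    Each pod is a dict like {"name": str, "phase": str,
    "conditions": [{"type": str, "status": str}, ...]} -- a simplified
    stand-in for a v1.Pod and its status conditions.
    """
    failed = []
    for pod in pods:
        ready = any(c["type"] == "Ready" and c["status"] == "True"
                    for c in pod["conditions"])
        if pod["phase"] != "Running" or not ready:
            failed.append(pod["name"])
    return failed

pods = [
    {"name": "fluentd-cloud-logging", "phase": "Pending",
     "conditions": [{"type": "Ready", "status": "False"}]},
    {"name": "kube-proxy", "phase": "Running",
     "conditions": [{"type": "Ready", "status": "True"}]},
]
print(not_ready_pods(pods))  # -> ['fluentd-cloud-logging']
```

In the logs above, fluentd-cloud-logging, kube-dns, kube-dns-autoscaler, and kubernetes-dashboard are exactly the pods this filter would flag: Pending phase with Ready=False.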

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/309/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420dec360>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340
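Every failure in this run is the same precondition error: a namespace left over from an earlier test (e2e-tests-services-hs2qn) was still terminating, and the check at scheduler_predicates.go:78 refuses to start while any e2e namespace remains active. The shape of such a wait, sketched as a generic polling loop (hypothetical helper with a caller-supplied lister, not the framework's code):

```python
import time

def wait_for_namespaces_deleted(list_namespaces, timeout=60.0, poll=1.0,
                                prefix="e2e-tests-"):
    """Poll until no namespace whose name starts with `prefix` remains.

    `list_namespaces` is a caller-supplied callable returning the current
    namespace names -- a stand-in for an API client call.
    """
    deadline = time.monotonic() + timeout
    while True:
        leftovers = [n for n in list_namespaces() if n.startswith(prefix)]
        if not leftovers:
            return True          # all e2e namespaces are gone
        if time.monotonic() >= deadline:
            return False         # timed out with namespaces still active
        time.sleep(poll)
```

When a namespace is stuck terminating (for example, a finalizer that cannot complete), a loop like this times out and every subsequent serial test fails with the "Namespace ... is active" error seen above.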

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214b8720>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218180c0>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b088e0>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f58e30>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219f5170>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421dde8b0>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*url.Error | 0xc421523860>: {
        Op: "Get",
        URL: "https://35.184.44.145/api/v1/namespaces/e2e-tests-services-hs2qn/services/service2",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 44, 145],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.44.145/api/v1/namespaces/e2e-tests-services-hs2qn/services/service2: dial tcp 35.184.44.145:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:444

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
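The raw syscall error above, `Err: 0x6f`, is errno 111, which on Linux is `ECONNREFUSED` -- consistent with the rendered `connection refused` message while the apiserver is still restarting. A quick sanity check (the symbolic-name lookup assumes a Linux errno table):

```python
import errno

# The failure dump shows the raw syscall error as hex: Err: 0x6f.
code = 0x6f
print(code)  # -> 111
# On Linux, errno 111 is ECONNREFUSED ("connection refused");
# the mapping below is platform-dependent.
print(errno.errorcode.get(code))
```

So this is not a scheduler problem: the test polled the service while the restarted apiserver was not yet accepting connections.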

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421335580>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42153ed90>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c6a8e0>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421791ca0>: {
        s: "Namespace e2e-tests-services-hs2qn is active",
    }
    Namespace e2e-tests-services-hs2qn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/312/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213c5770>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216ec690>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421737770>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421208510>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216eca80>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b721e0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216ecfa0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Expected error:
    <*errors.StatusError | 0xc420216880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "client: etcd cluster is unavailable or misconfigured",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    client: etcd cluster is unavailable or misconfigured
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:382

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421eb5320>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420772d80>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42042fa30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b73ed0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42042fa30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42042fa30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f59450>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42127f1b0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421816ca0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421414790>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-18725 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/318/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42105a880>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Expected error:
    <*errors.errorString | 0xc4203acd70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32945

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-1171-pvc-faa1d8d1-d874-11e6-af8b-42010af00037  us-central1-a  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42112a000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420720000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 11 19:27:51.065: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc420720ca0>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Expected error:
    <*errors.errorString | 0xc420e487c0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:247
Expected error:
    <*errors.errorString | 0xc420e48a20>: {
        s: "Only 0 pods started out of 10",
    }
    Only 0 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:215

Issues about this test specifically: #31407

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:47
Expected
    <int>: 0
not to be zero-valued
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:43

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:247
Expected error:
    <*errors.errorString | 0xc420e48960>: {
        s: "Only 0 pods started out of 10",
    }
    Only 0 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:215

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/333/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d4a310>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42145c720>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421807ea0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421446770>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212669c0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc421af2a60>: {
        s: "Only 3 pods started out of 5",
    }
    Only 3 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420922c50>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c73620>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420fab940>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42191da80>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42187d870>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421447fe0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.errorString | 0xc42138ea60>: {
        s: "Only 3 pods started out of 5",
    }
    Only 3 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:359

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ff0b50>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421800b20>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*url.Error | 0xc4214b8ed0>: {
        Op: "Get",
        URL: "https://130.211.227.212/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-p05lw/endpoints",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 130, 211, 227, 212],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://130.211.227.212/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-p05lw/endpoints: dial tcp 130.211.227.212:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:399

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d4a7f0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-p05lw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/360/
Multiple broken tests:

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:161
Failed waiting for pod wrapped-volume-race-10502997-dec4-11e6-a0d8-0242ac110006-637b7 to enter running state
Expected error:
    <*errors.errorString | 0xc4203ff670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:383

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203ff670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #31918

Failed: DiffResources {e2e.go}

Error: 19 leaked resources
[ instance-templates ]
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-0e99aa7b  n1-standard-2               2017-01-19T16:34:37.375-08:00
[ instance-groups ]
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-0e99aa7b-grp  us-central1-a  zone   bootstrap-e2e  Yes      3
[ instances ]
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-0e99aa7b-ls4q  us-central1-a  n1-standard-2               10.240.0.2   104.154.161.152  RUNNING
+gke-bootstrap-e2e-default-pool-0e99aa7b-n8l3  us-central1-a  n1-standard-2               10.240.0.4   35.184.29.170    RUNNING
+gke-bootstrap-e2e-default-pool-0e99aa7b-rb5f  us-central1-a  n1-standard-2               10.240.0.3   104.154.195.32   RUNNING
[ disks ]
+gke-bootstrap-e2e-default-pool-0e99aa7b-ls4q                     us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-0e99aa7b-n8l3                     us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-0e99aa7b-rb5f                     us-central1-a  100      pd-standard  READY
[ routes ]
+default-route-a5402fa7cf71fa22                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+default-route-bc91f7b4d5b548a3                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
[ routes ]
+gke-bootstrap-e2e-d8ac9a87-8e4b4d90-deb2-11e6-bfde-42010af00019  bootstrap-e2e  10.96.0.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-0e99aa7b-ls4q  1000
+gke-bootstrap-e2e-d8ac9a87-9e389997-dea8-11e6-9ec6-42010af00035  bootstrap-e2e  10.96.1.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-0e99aa7b-rb5f  1000
+gke-bootstrap-e2e-d8ac9a87-9e545f34-dea8-11e6-9ec6-42010af00035  bootstrap-e2e  10.96.2.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-0e99aa7b-n8l3  1000
[ firewall-rules ]
+gke-bootstrap-e2e-d8ac9a87-all  bootstrap-e2e  10.96.0.0/14      udp,icmp,esp,ah,sctp,tcp
+gke-bootstrap-e2e-d8ac9a87-ssh  bootstrap-e2e  35.184.23.203/32  tcp:22                                  gke-bootstrap-e2e-d8ac9a87-node
+gke-bootstrap-e2e-d8ac9a87-vms  bootstrap-e2e  10.240.0.0/16     tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-d8ac9a87-node

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Expected error:
    <*url.Error | 0xc421f100c0>: {
        Op: "Get",
        URL: "https://35.184.23.203/api/v1/namespaces/e2e-tests-emptydir-wrapper-djrhr/replicationcontrollers/wrapped-volume-race-67dd3eb8-deca-11e6-a0d8-0242ac110006",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 23, 203],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.23.203/api/v1/namespaces/e2e-tests-emptydir-wrapper-djrhr/replicationcontrollers/wrapped-volume-race-67dd3eb8-deca-11e6-a0d8-0242ac110006: dial tcp 35.184.23.203:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:369

Issues about this test specifically: #32945

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203ff670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420dda6e0>: {
        s: "Namespace e2e-tests-emptydir-wrapper-7fz9d is active",
    }
    Namespace e2e-tests-emptydir-wrapper-7fz9d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203ff670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203ff670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212fbb00>: {
        s: "Namespace e2e-tests-emptydir-wrapper-7fz9d is active",
    }
    Namespace e2e-tests-emptydir-wrapper-7fz9d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420708220>: {
        s: "Namespace e2e-tests-emptydir-wrapper-7fz9d is active",
    }
    Namespace e2e-tests-emptydir-wrapper-7fz9d is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:475
Pod was not deleted during network partition.
Expected
    <*url.Error | 0xc421c2e180>: {
        Op: "Get",
        URL: "https://35.184.23.203/api/v1/namespaces/e2e-tests-network-partition-2sxc8/pods?labelSelector=job%3Dnetwork-partition",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 23, 203],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
to equal
    <*errors.errorString | 0xc4203ff670>: {
        s: "timed out waiting for the condition",
    }
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:464

Issues about this test specifically: #36950

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/381/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc421145340>: {
        s: "service verification failed for: 10.99.255.57\nexpected [service1-7td0k service1-vg1lt service1-vl1v4]\nreceived [service1-7td0k service1-vl1v4]",
    }
    service verification failed for: 10.99.255.57
    expected [service1-7td0k service1-vg1lt service1-vl1v4]
    received [service1-7td0k service1-vl1v4]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #29514 #38288
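The "service verification failed" message above is a set difference: the test hits the service VIP repeatedly and compares the pod names that answered against the endpoints it expected. A hedged, self-contained sketch of that comparison (the function name and shape are illustrative; the real check lives in test/e2e/service.go):

```go
package main

import (
	"fmt"
	"sort"
)

// missingEndpoints reports which expected pod names were never observed in
// responses from the service VIP — the gap that produces messages like
// "expected [a b c] / received [a c]".
func missingEndpoints(expected, received []string) []string {
	seen := make(map[string]bool, len(received))
	for _, r := range received {
		seen[r] = true
	}
	var missing []string
	for _, e := range expected {
		if !seen[e] {
			missing = append(missing, e)
		}
	}
	sort.Strings(missing)
	return missing
}

func main() {
	// Values taken from the failure above: one backend never answered
	// through the VIP after kube-proxy restarted.
	expected := []string{"service1-7td0k", "service1-vg1lt", "service1-vl1v4"}
	received := []string{"service1-7td0k", "service1-vl1v4"}
	fmt.Println(missingEndpoints(expected, received)) // [service1-vg1lt]
}
```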

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Failed waiting for pod wrapped-volume-race-7d04e0b0-df5f-11e6-8e2c-0242ac11000a-64h1t to enter running state
Expected error:
    <*errors.errorString | 0xc42043ee70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:383

Issues about this test specifically: #32945

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Jan 20 14:50:11.919: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Jan 20 15:13:20.553: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan 20 13:41:36.079: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421086c50>: {
        s: "service verification failed for: 10.99.244.148\nexpected [service1-jznzs service1-n8652 service1-tps66]\nreceived [service1-jznzs service1-n8652]",
    }
    service verification failed for: 10.99.244.148
    expected [service1-jznzs service1-n8652 service1-tps66]
    received [service1-jznzs service1-n8652]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/433/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a9ec70>: {
        s: "Namespace e2e-tests-services-wx2j7 is active",
    }
    Namespace e2e-tests-services-wx2j7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:47
Expected
    <int>: 0
not to be zero-valued
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:43

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420322e10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211cd990>: {
        s: "Namespace e2e-tests-services-wx2j7 is active",
    }
    Namespace e2e-tests-services-wx2j7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215ef350>: {
        s: "Namespace e2e-tests-services-wx2j7 is active",
    }
    Namespace e2e-tests-services-wx2j7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420322e10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214572b0>: {
        s: "Namespace e2e-tests-services-wx2j7 is active",
    }
    Namespace e2e-tests-services-wx2j7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420322e10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Jan 30 11:40:52.301: error while waiting for apiserver up: waiting for apiserver timed out
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:437

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420baf0b0>: {
        s: "Namespace e2e-tests-services-wx2j7 is active",
    }
    Namespace e2e-tests-services-wx2j7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420322e10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217f9d30>: {
        s: "Namespace e2e-tests-services-wx2j7 is active",
    }
    Namespace e2e-tests-services-wx2j7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420322e10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36794

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420322e10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #31428

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c7ef80>: {
        s: "Namespace e2e-tests-services-wx2j7 is active",
    }
    Namespace e2e-tests-services-wx2j7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Expected error:
    <*errors.errorString | 0xc420e5e960>: {
        s: "at least one node failed to be ready",
    }
    at least one node failed to be ready
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:397

Issues about this test specifically: #37373

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/446/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4224dcf50>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc4225e6010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4219f2d70>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:374

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc420c12010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420c94030>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:86
Node gke-bootstrap-e2e-default-pool-476b3687-gmxh did not run out of disk within 5m0s
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:251

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/449/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4206f3a50>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []\nkube-proxy-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj            gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []
    kube-proxy-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj            gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:105
Feb  2 17:19:10.018: Test failed; at least one node failed to reboot in the time given.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:169

Issues about this test specifically: #34123 #35398

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214caf00>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []\nkube-proxy-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj            gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []
    kube-proxy-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj            gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f956a0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []\nkube-proxy-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj            gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []
    kube-proxy-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj            gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d4b400>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []\nkube-proxy-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj            gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []
    kube-proxy-gke-bootstrap-e2e-default-pool-5eb3b941-c5bj            gke-bootstrap-e2e-default-pool-5eb3b941-c5bj Pending       []
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/453/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc422162b90>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc422162010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:86
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:255

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42120a010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Feb  3 13:44:27.230: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:111
Feb  3 14:17:42.972: Test failed; at least one node failed to reboot in the time given.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:169

Issues about this test specifically: #33407 #33623

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1.5/454/
Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 19 leaked resources
[ instance-templates ]
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-fa3dc814  n1-standard-2               2017-02-03T17:02:37.612-08:00
[ instance-groups ]
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-fa3dc814-grp  us-central1-a  zone   bootstrap-e2e  Yes      3
[ instances ]
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-fa3dc814-2wkm  us-central1-a  n1-standard-2               10.240.0.5   146.148.97.98   RUNNING
+gke-bootstrap-e2e-default-pool-fa3dc814-cgp9  us-central1-a  n1-standard-2               10.240.0.4   107.178.221.51  RUNNING
+gke-bootstrap-e2e-default-pool-fa3dc814-qbrd  us-central1-a  n1-standard-2               10.240.0.3   130.211.213.56  RUNNING
[ disks ]
+gke-bootstrap-e2e-default-pool-fa3dc814-2wkm                     us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-fa3dc814-cgp9                     us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-fa3dc814-qbrd                     us-central1-a  100      pd-standard  READY
[ routes ]
+default-route-764eff759c6f1551                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
[ routes ]
+default-route-869af3f6b3a4f852                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
[ routes ]
+gke-bootstrap-e2e-aea57b6e-9f62e256-ea93-11e6-8def-42010af0002b  bootstrap-e2e  10.96.3.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-fa3dc814-2wkm  1000
+gke-bootstrap-e2e-aea57b6e-e918de21-ea75-11e6-8def-42010af0002b  bootstrap-e2e  10.96.0.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-fa3dc814-cgp9  1000
+gke-bootstrap-e2e-aea57b6e-e960b1ce-ea75-11e6-8def-42010af0002b  bootstrap-e2e  10.96.1.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-fa3dc814-qbrd  1000
[ firewall-rules ]
+gke-bootstrap-e2e-aea57b6e-all  bootstrap-e2e  10.96.0.0/14       tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-aea57b6e-ssh  bootstrap-e2e  104.198.185.95/32  tcp:22                                  gke-bootstrap-e2e-aea57b6e-node
+gke-bootstrap-e2e-aea57b6e-vms  bootstrap-e2e  10.240.0.0/16      tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-aea57b6e-node

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.StatusError | 0xc42195df80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.96.0.107:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rs-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rs-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.96.0.107:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.96.0.107:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'") has prevented the request from succeeding (post services rs-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:212

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Feb  3 22:00:36.984: error restarting apiserver: error running gcloud [container clusters --project=k8s-gci-gke-serial-1-5 --zone=us-central1-a upgrade bootstrap-e2e --master --cluster-version=1.5.3-beta.0.70+b15f9368ce432f --quiet]; got error signal: interrupt, stdout "", stderr "Upgrading bootstrap-e2e...\n\n\nCommand killed by keyboard interrupt\n\n"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:433

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:86
Node gke-bootstrap-e2e-default-pool-fa3dc814-cgp9 did not run out of disk within 5m0s
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:251

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

@spxtr spxtr closed this as completed Feb 7, 2017