
ci-kubernetes-e2e-kops-aws-serial: broken test run #42602

Closed
k8s-github-robot opened this issue Mar 6, 2017 · 374 comments
Labels: kind/flake Categorizes issue or PR as related to a flaky test.

@k8s-github-robot
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/1/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  6 14:29:18.128: Node ip-172-20-56-237.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203d8250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287
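The "timed out waiting for the condition" error above is the generic failure surfaced by the e2e suite's polling helper (`wait.Poll` in the Go test framework): a condition function is retried on an interval until a deadline, and only this generic message survives when it never succeeds. A minimal sketch of those semantics, in Python for brevity; the function name and intervals here are illustrative, not the Kubernetes API:

```python
import time

def poll(interval, timeout, condition):
    """Call `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Mirrors the semantics of the e2e
    suite's wait.Poll, whose failure is the generic error seen above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise TimeoutError("timed out waiting for the condition")
```

Because the timeout error carries no detail about *which* condition failed, triaging these flakes requires going back to the file/line reference in the log (here daemon_set.go:287).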

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc423bd22e0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc4203d8250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-46-199.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-179.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-37.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-56-237.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-6.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-46-199.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-49-179.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-37.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-56-237.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-63-6.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-46-199.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-179.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-37.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-56-237.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-6.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-46-199.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-49-179.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-37.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-56-237.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-63-6.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:264
Expected error:
    <*errors.StatusError | 0xc421186a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Operation cannot be fulfilled on daemonsets.extensions \"daemon-set\": the object has been modified; please apply your changes to the latest version and try again",
            Reason: "Conflict",
            Details: {Name: "daemon-set", Group: "extensions", Kind: "daemonsets", Causes: nil, RetryAfterSeconds: 0},
            Code: 409,
        },
    }
    Operation cannot be fulfilled on daemonsets.extensions "daemon-set": the object has been modified; please apply your changes to the latest version and try again
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:254
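The 409 Conflict above is the API server's optimistic-concurrency check: the test read the DaemonSet, another writer bumped its resourceVersion in the meantime, and the stale write was rejected. The usual test-side remedy is to re-read the latest object and reapply the change on each Conflict (the pattern behind client-go's `retry.RetryOnConflict`). A toy sketch of that pattern, in Python with a simulated store; every name here is illustrative, not a Kubernetes API:

```python
class Conflict(Exception):
    pass

class Store:
    """Toy object store with a resourceVersion, mimicking the API
    server's optimistic-concurrency check."""
    def __init__(self, obj):
        self.obj = dict(obj, resourceVersion=1)

    def get(self):
        return dict(self.obj)

    def update(self, obj):
        # Reject writes based on a stale read, as the API server does.
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("the object has been modified; please apply "
                           "your changes to the latest version and try again")
        self.obj = dict(obj, resourceVersion=self.obj["resourceVersion"] + 1)

def retry_on_conflict(store, mutate, attempts=5):
    """On each Conflict, re-read the latest version and reapply the
    mutation -- the same shape as client-go's retry.RetryOnConflict."""
    for _ in range(attempts):
        obj = store.get()
        mutate(obj)
        try:
            store.update(obj)
            return
        except Conflict:
            continue
    raise Conflict("retries exhausted")
```

The key design point is that `mutate` is reapplied to a *fresh* read each attempt, so the concurrent writer's changes are preserved rather than clobbered.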

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420eb45b0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  6 15:33:21.812: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 labels Mar 6, 2017
@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/2/
Multiple broken tests:

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-39-35.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-224.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-13.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-153.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-170.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-39-35.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-224.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-13.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-153.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-58-170.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  6 16:19:35.323: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:264
Expected error:
    <*errors.StatusError | 0xc4219d5080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Operation cannot be fulfilled on daemonsets.extensions \"daemon-set\": the object has been modified; please apply your changes to the latest version and try again",
            Reason: "Conflict",
            Details: {Name: "daemon-set", Group: "extensions", Kind: "daemonsets", Causes: nil, RetryAfterSeconds: 0},
            Code: 409,
        },
    }
    Operation cannot be fulfilled on daemonsets.extensions "daemon-set": the object has been modified; please apply your changes to the latest version and try again
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:254

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203fd640>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42148a030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-39-35.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-224.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-13.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-153.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-170.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-39-35.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-224.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-13.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-153.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-58-170.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421830030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  6 17:59:44.496: Node ip-172-20-44-224.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/3/
Multiple broken tests:

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  6 22:03:22.398: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203fd8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-33-29.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-153.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-55.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-63.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-25.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-33-29.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-153.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-55.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-63.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-25.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421726300>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4208662f0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc4203fd8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-33-29.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-153.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-55.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-63.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-25.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-33-29.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-153.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-55.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-63.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-25.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  6 20:21:25.746: Node ip-172-20-40-153.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/4/
Multiple broken tests:

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-37-36.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-217.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-54-200.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-226.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-47.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-37-36.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-217.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-54-200.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-58-226.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-47.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203fb300>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  7 01:43:52.926: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420f5a310>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4212a6050>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-58-226.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-47.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-36.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-217.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-54-200.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-58-226.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-47.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-37-36.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-217.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-54-200.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  7 01:24:05.838: Node ip-172-20-48-217.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/5/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420b8e030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-56-229.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-14.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-34-154.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-46-199.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-10.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-56-229.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-58-14.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-34-154.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-46-199.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-48-10.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: install_gcloud {PRE-SETUP}



Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
.............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-34-154.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-46-199.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-10.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-56-229.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-14.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-34-154.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-46-199.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-48-10.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-56-229.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-58-14.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  7 04:40:27.385: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203fcb50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420e68030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  7 04:53:16.852: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  7 07:11:07.196: Node ip-172-20-58-14.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@calebamiles modified the milestone: v1.6 Mar 7, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/6/
Multiple broken tests:

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  7 12:39:22.045: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: install_gcloud {PRE-SETUP}



Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203ee950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421676690>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  7 10:53:25.263: Node ip-172-20-35-121.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-55-231.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-35-121.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-125.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-4.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-182.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-55-231.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-35-121.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-39-125.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-42-4.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-182.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-35-121.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-125.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-4.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-182.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-231.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-35-121.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-39-125.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-42-4.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-182.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-55-231.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421e94520>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/7/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4215aa030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203fe310>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  7 13:08:58.465: Node ip-172-20-48-49.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-32-77.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-36-245.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-49.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-108.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-184.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-32-77.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-36-245.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-48-49.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-58-108.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-60-184.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: install_gcloud {PRE-SETUP}



Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
..........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  7 16:14:09.728: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421704030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-32-77.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-36-245.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-49.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-108.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-184.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-32-77.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-36-245.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-48-49.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-58-108.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-60-184.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/8/
Multiple broken tests:

Failed: install_gcloud {PRE-SETUP}



Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
..............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203d95b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4208bc030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  7 17:46:22.695: Node ip-172-20-47-249.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4215185f0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  7 20:16:04.596: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-48-112.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-46.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-117.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-111.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-249.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-48-112.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-44-46.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-45-117.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-47-111.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-47-249.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-44-46.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-117.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-111.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-249.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-112.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-44-46.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-45-117.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-47-111.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-47-249.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-48-112.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  7 20:23:07.764: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@spxtr spxtr removed their assignment Mar 8, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/9/
Multiple broken tests:

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  7 23:30:17.358: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42210c040>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  7 21:38:39.270: Node ip-172-20-58-247.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-40-144.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-178.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-56-87.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-247.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-66.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-40-144.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-47-178.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-56-87.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-58-247.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-62-66.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203fec10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287

Failed: install_gcloud {PRE-SETUP}


Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc4203fec10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42210c040>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-47-178.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-56-87.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-247.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-66.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-144.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-47-178.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-56-87.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-58-247.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-62-66.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-40-144.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  7 23:07:25.632: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/10/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421456030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-33-110.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-34-90.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-38-226.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-148.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-146.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-33-110.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-34-90.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-38-226.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-148.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-52-146.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  8 03:25:34.780: Node ip-172-20-45-148.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4213505b0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: install_gcloud {PRE-SETUP}



Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-33-110.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-34-90.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-38-226.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-148.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-146.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-33-110.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-34-90.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-38-226.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-148.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-52-146.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/11/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-37-23.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-41-21.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-41-220.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-29.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-34-149.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-37-23.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-41-21.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-41-220.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-51-29.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-34-149.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: install_gcloud {PRE-SETUP}



Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
..............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  8 09:05:05.664: Node ip-172-20-37-23.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-34-149.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-23.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-41-21.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-41-220.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-29.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-34-149.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-37-23.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-41-21.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-41-220.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-51-29.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420898450>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420850a10>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  8 07:59:01.010: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/12/
Multiple broken tests:

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  8 12:57:40.492: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421de2040>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4218aa2e0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-50-61.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-99.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-254.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-41-32.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-43-62.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-50-61.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-55-99.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-254.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-41-32.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-43-62.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  8 10:33:26.008: Node ip-172-20-41-32.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203c9f10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287

Failed: install_gcloud {PRE-SETUP}



Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227
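The `install_gcloud` traceback above is a keyword-argument mismatch: the SDK's `UpdateRC()` signature changed, but the bundled `install.py` still passes the removed `completion_update` keyword. A hypothetical reduction of that failure mode (illustrative names, not the actual SDK source):

```python
# New signature: the keyword was dropped in this SDK release.
def update_rc(sdk_root):
    return f"updated rc under {sdk_root}"

# Old caller: still passes the removed keyword, as install.py does.
try:
    update_rc(sdk_root="/google-cloud-sdk", completion_update=True)
except TypeError as e:
    print(type(e).__name__)  # TypeError
```

Any caller pinned to the old call site fails the same way until the installer and SDK versions agree.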

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  8 11:38:48.194: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-43-62.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-50-61.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-99.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-254.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-41-32.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-43-62.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-50-61.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-55-99.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-254.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-41-32.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880
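The `errors.aggregate` value above is the pattern of collecting one error per node and surfacing them as a single combined failure. A hedged sketch of that shape (hypothetical helper, not the k8s.io/apimachinery implementation):

```python
def aggregate(errors):
    """Drop nil entries; return None if nothing failed, else one combined error."""
    errors = [e for e in errors if e is not None]
    if not errors:
        return None
    return RuntimeError("; ".join(str(e) for e in errors))

nodes = ["ip-172-20-43-62", "ip-172-20-50-61"]
err = aggregate(
    f'Resource usage on node "{n}" is not ready yet' for n in nodes
)
print(err is not None)  # True
```

The test asserts the aggregate is nil; five per-node "not ready yet" entries means resource collection never finished on any node before the check ran.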

@k8s-github-robot


https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/13/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  8 14:01:50.504: Node ip-172-20-34-235.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-34-235.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-240.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-253.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-115.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-174.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-34-235.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-240.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-253.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-63-115.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-63-174.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4211ae2f0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-44-253.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-115.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-174.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-34-235.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-240.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-44-253.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-63-115.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-63-174.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-34-235.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-240.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421270030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.errorString | 0xc4203fd440>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:287

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
.............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc4203fd440>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/14/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:293
Expected error:
    <*errors.StatusError | 0xc421b0ad80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Operation cannot be fulfilled on daemonsets.extensions \"daemon-set\": the object has been modified; please apply your changes to the latest version and try again",
            Reason: "Conflict",
            Details: {Name: "daemon-set", Group: "extensions", Kind: "daemonsets", Causes: nil, RetryAfterSeconds: 0},
            Code: 409,
        },
    }
    Operation cannot be fulfilled on daemonsets.extensions "daemon-set": the object has been modified; please apply your changes to the latest version and try again
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:283
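The 409 Conflict above is the apiserver's standard optimistic-concurrency response: the DaemonSet was modified between the test's read and its write. Callers normally wrap such updates in a re-read-and-retry loop; a sketch of that pattern (hypothetical helper, not client-go's actual `RetryOnConflict` API):

```python
class ConflictError(Exception):
    """Stand-in for the HTTP 409 the apiserver returns above."""

def update_with_retry(update, attempts=5):
    """Re-run the read-modify-write until it stops racing with other writers."""
    for attempt in range(attempts):
        try:
            return update()
        except ConflictError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt

calls = {"n": 0}
def racy_update():
    calls["n"] += 1
    if calls["n"] < 3:  # first two writes lose the race to the controller
        raise ConflictError("daemon-set was modified concurrently")
    return "updated"

print(update_with_retry(racy_update))  # updated
```

The test here failed because it performed a bare update with no such retry, so a single race with the DaemonSet controller was fatal.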

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-40-156.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-230.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-176.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-56-174.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-57-85.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-40-156.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-49-230.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-55-176.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-56-174.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-57-85.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42176a040>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-40-156.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-230.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-176.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-56-174.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-57-85.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-40-156.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-49-230.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-55-176.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-56-174.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-57-85.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  8 21:06:19.348: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4210b0030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  8 19:55:58.603: Node ip-172-20-55-176.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/15/
Multiple broken tests:

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  8 23:55:56.027: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421382050>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-45-175.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-160.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-22.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-108.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-143.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-45-175.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-48-160.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-49-22.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-108.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-61-143.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:391
error waiting for daemon pod template generation to be 1
Expected error:
    <*errors.errorString | 0xc420414ff0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:386

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  8 23:30:14.545: Node ip-172-20-45-175.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  8 23:45:57.073: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc420414ff0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-49-22.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-108.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-143.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-175.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-160.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-49-22.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-108.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-61-143.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-45-175.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-48-160.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421382030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/16/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42016a090>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-45-38.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-132.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-140.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-34-177.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-34-39.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-45-38.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-47-132.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-60-140.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-34-177.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-34-39.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  9 04:13:56.518: Node ip-172-20-34-39.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  9 04:25:48.455: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
.................failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227
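The install_gcloud traceback above is a version-skew failure: the bundled `bootstrapping/install.py` passes a `completion_update` keyword that the installed `UpdateRC()` no longer declares. A minimal sketch of the failure mode follows; the function names mirror the log but are illustrative stand-ins, not the actual Cloud SDK code:

```python
# Illustrative reproduction of the install_gcloud TypeError above.
# Calling any Python function with a keyword argument it does not
# declare raises TypeError before the function body ever runs, which
# is why the installer dies immediately after "Update done!".

def update_rc(sdk_root):
    # Newer SDK signature: the 'completion_update' kwarg was removed.
    return "updated rc files under %s" % sdk_root

def main():
    # Older bootstrapping script still passes the removed keyword:
    return update_rc(sdk_root="//google-cloud-sdk", completion_update=True)

try:
    main()
except TypeError as err:
    print(err)
```

The fix in such cases is to keep the bootstrapping scripts and the installed components at matching versions, or to drop the removed keyword from the caller.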

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-34-177.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-34-39.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-38.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-132.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-140.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-34-177.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-34-39.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-45-38.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-47-132.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-60-140.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:317
Expected error:
    <*errors.errorString | 0xc4203d9b00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:307

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420cae030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/17/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42139a040>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  9 09:15:36.672: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: install_gcloud {PRE-SETUP}

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...........failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  9 07:27:42.823: Node ip-172-20-51-86.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42186a030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-44-52.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-77.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-85.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-86.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-130.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-44-52.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-45-77.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-51-85.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-51-86.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-52-130.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:317
Expected error:
    <*errors.errorString | 0xc4203d62a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-51-86.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-130.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-52.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-77.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-85.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-51-86.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-52-130.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-44-52.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-45-77.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-51-85.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/18/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-38-64.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-220.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-46-27.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-86.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-32-246.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-38-64.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-220.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-46-27.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-49-86.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-32-246.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  9 15:38:30.996: Node ip-172-20-38-64.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-32-246.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-38-64.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-220.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-46-27.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-86.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-32-246.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-38-64.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-220.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-46-27.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-49-86.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc423870050>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: install_gcloud {PRE-SETUP}

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
...............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420fe8360>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:317
Expected error:
    <*errors.errorString | 0xc4203e4570>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:307

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  9 13:49:51.078: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/19/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  9 16:24:17.206: Node ip-172-20-60-152.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42181a030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: install_gcloud {PRE-SETUP}

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42181a620>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-45-14.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-67.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-177.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-152.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-36-188.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-45-14.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-49-67.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-55-177.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-60-152.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-36-188.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-36-188.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-14.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-67.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-177.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-152.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-36-188.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-14.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-49-67.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-55-177.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-60-152.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/20/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-42-43.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-114.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-10.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-219.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-89.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-42-43.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-114.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-10.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-219.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-55-89.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4221ae4c0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar  9 22:35:51.492: Node ip-172-20-42-43.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: install_gcloud {PRE-SETUP}

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
............failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4213c6660>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-42-43.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-114.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-10.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-219.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-89.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-42-43.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-114.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-10.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-219.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-55-89.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/21/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:481
Mar 10 01:23:51.431: Node ip-172-20-53-215.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:69

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-52-180.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-215.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-166.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-32-170.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-95.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-52-180.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-215.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-58-166.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-32-170.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-37-95.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42086e5a0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331

Issues about this test specifically: #37479

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:275
Expected error:
    <*errors.StatusError | 0xc421336a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Operation cannot be fulfilled on daemonsets.extensions \"daemon-set\": the object has been modified; please apply your changes to the latest version and try again",
            Reason: "Conflict",
            Details: {Name: "daemon-set", Group: "extensions", Kind: "daemonsets", Causes: nil, RetryAfterSeconds: 0},
            Code: 409,
        },
    }
    Operation cannot be fulfilled on daemonsets.extensions "daemon-set": the object has been modified; please apply your changes to the latest version and try again
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:260
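The 409 Conflict above is Kubernetes optimistic concurrency: the test submitted an update carrying a stale `resourceVersion`, so the API server rejected it and asked the client to re-read the object and reapply the change. A generic sketch of the read-modify-retry loop the error message is asking for (a toy in-memory store, not the real client-go `RetryOnConflict` helper):

```python
# Illustrative optimistic-concurrency retry loop. The Store models the
# API server's resourceVersion check; none of this is client-go.
class Conflict(Exception):
    pass

class Store:
    def __init__(self, obj):
        self.obj = dict(obj, resourceVersion=1)

    def get(self):
        return dict(self.obj)

    def update(self, obj):
        # Reject writes based on a stale read, like the API server's 409.
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("the object has been modified; "
                           "please apply your changes to the latest version and try again")
        self.obj = dict(obj, resourceVersion=obj["resourceVersion"] + 1)

def update_with_retry(store, mutate, attempts=5):
    for _ in range(attempts):
        latest = store.get()   # re-read the latest version
        mutate(latest)         # reapply the desired change on top of it
        try:
            store.update(latest)
            return latest
        except Conflict:
            continue           # a concurrent writer won; loop and retry
    raise Conflict("retries exhausted")
```

Test code that updates a DaemonSet it just created can lose this race against the controller's own status writes, which is consistent with the flake pattern above: retrying on conflict, rather than failing on the first 409, is the conventional fix.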

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar 10 00:19:25.893: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: install_gcloud {PRE-SETUP}


      
      

Your current Cloud SDK version is: 146.0.0
Installing components from version: 146.0.0

+----------------------------------------------------+
|        These components will be installed.         |
+-----------------------------+------------+---------+
|             Name            |  Version   |   Size  |
+-----------------------------+------------+---------+
| gcloud-deps (Linux, x86_64) | 2017.02.21 | 3.9 MiB |
+-----------------------------+------------+---------+

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

#============================================================#
#= Creating update staging area                             =#
#============================================================#
#= Installing: gcloud-deps (Linux, x86_64)                  =#
#============================================================#
#= Creating backup and activating new installation          =#
#============================================================#

Performing post processing steps...
..................failed.
WARNING: Post processing failed.  Run `gcloud info --show-log` to view the failures.

Update done!

Traceback (most recent call last):
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 206, in <module>
    main()
  File "//google-cloud-sdk/bin/bootstrapping/install.py", line 191, in main
    sdk_root=bootstrapping.SDK_ROOT,
TypeError: UpdateRC() got an unexpected keyword argument 'completion_update'
      
      

Issues about this test specifically: #32669 #36416 #36842 #40203 #42227

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420784360>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:266

Issues about this test specifically: #37259

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-32-170.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-95.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-180.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-215.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-166.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-32-170.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-37-95.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-52-180.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-215.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-58-166.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/354/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc42029c5c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 30 08:04:20.292: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-47-210.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-21.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-101.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-239.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-33-249.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-47-210.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-48-21.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-101.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-58-239.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-33-249.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-33-249.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-210.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-21.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-101.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-239.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-33-249.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-47-210.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-48-21.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-101.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-58-239.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
Apr 30 07:27:30.374: Node ip-172-20-58-239.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422ea9660>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-33-249.ec2.internal
to equal
    <string>: ip-172-20-58-239.ec2.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42070e030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/355/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-35-35.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-36-145.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-43-78.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-198.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-33.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-35-35.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-36-145.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-43-78.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-198.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-55-33.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-35-35.us-west-1.compute.internal
to equal
    <string>: ip-172-20-55-33.us-west-1.compute.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42160e030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-35-35.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-36-145.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-43-78.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-198.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-33.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-35-35.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-36-145.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-43-78.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-198.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-55-33.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202bcc70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 30 10:39:39.389: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42198c490>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
Apr 30 11:52:16.911: Node ip-172-20-55-33.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/356/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42036b4f0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-40-156.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-31.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-109.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-57-216.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-171.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-40-156.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-47-31.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-109.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-57-216.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-60-171.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202e0b00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
Apr 30 13:13:23.420: Node ip-172-20-57-216.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0b00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0b00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0b00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42165cc10>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0b00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-47-31.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-109.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-57-216.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-171.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-156.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-47-31.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-109.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-57-216.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-60-171.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-156.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202e0b00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/357/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-39-198.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-142.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-128.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-14.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-49.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-39-198.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-142.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-49-128.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-51-14.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-51-49.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420ff6050>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-39-198.us-west-2.compute.internal
to equal
    <string>: ip-172-20-51-14.us-west-2.compute.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: TearDown {e2e.go}

error during /workspace/kops delete cluster e2e-kops-aws-serial.test-aws.k8s.io --yes: exit status 1

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc42029e000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 30 18:46:32.546: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
Apr 30 18:36:33.795: Node ip-172-20-40-142.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4221de030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-40-142.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-128.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-14.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-49.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-198.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-40-142.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-49-128.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-51-14.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-51-49.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-39-198.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
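
For context on the `Test {e2e.go}` failures above: the `--ginkgo.focus` and `--ginkgo.skip` arguments are regular expressions matched against the full spec name, which is why this job runs only `[Serial]`/`[Disruptive]` specs while excluding `[Flaky]`, feature-gated, HPA, Dashboard, and NodePort specs. A minimal sketch of that selection logic (the spec names below are illustrative, not taken from a real run):

```python
import re

# Same patterns as the ginkgo-e2e.sh invocation above, with the shell
# backslash-escaping removed.
focus = re.compile(r"\[Serial\]|\[Disruptive\]")
skip = re.compile(r"\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort")

def selected(spec_name: str) -> bool:
    """Return True if ginkgo would run this spec under the flags above:
    the name must match the focus pattern and must not match skip."""
    return bool(focus.search(spec_name)) and not skip.search(spec_name)

print(selected("[k8s.io] Daemon set [Serial] should retry creating failed daemon pods"))  # True
print(selected("[k8s.io] Some test [Serial] [Flaky]"))  # False: skipped as Flaky
print(selected("[k8s.io] ordinary conformance test"))  # False: not in focus
```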

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/358/
Multiple broken tests:

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420255c50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
Apr 30 21:29:45.881: Node ip-172-20-62-165.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-43-80.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-191.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-165.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-33-92.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-139.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-43-80.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-191.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-62-165.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-33-92.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-39-139.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  1 00:26:15.020: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420255c50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31407

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420255c50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35277

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420d38b50>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc420255c50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420255c50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29516

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-33-92.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-139.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-43-80.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-191.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-165.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-33-92.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-39-139.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-43-80.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-191.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-62-165.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-33-92.ec2.internal
to equal
    <string>: ip-172-20-62-165.ec2.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4215f2030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420255c50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/359/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-37-202.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-38-199.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-229.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-46-81.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-150.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-37-202.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-38-199.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-42-229.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-46-81.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-63-150.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
Expected error:
    <*errors.errorString | 0xc4221767e0>: {
        s: "resource name may not be empty",
    }
    resource name may not be empty
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:430

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-38-199.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-229.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-46-81.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-150.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-202.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-38-199.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-42-229.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-46-81.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-63-150.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-37-202.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  1 03:54:18.962: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202e0da0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4225c8030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421f42330>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/360/
Multiple broken tests:

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc4a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-47-14.us-west-1.compute.internal
to equal
    <string>: ip-172-20-63-35.us-west-1.compute.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-44-193.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-14.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-227.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-54-88.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-35.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-44-193.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-47-14.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-227.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-54-88.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-63-35.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  1 06:15:14.720: Node ip-172-20-47-14.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Daemon set [Serial] should retry creating failed daemon pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202bc4a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42193a050>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202bc4a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  1 04:58:37.137: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421b8c8a0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-54-88.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-35.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-193.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-14.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-227.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-54-88.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-63-35.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-193.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-47-14.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-227.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/361/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421ec4fe0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  1 10:08:39.195: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422204610>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  1 11:19:10.902: Node ip-172-20-37-155.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-37-155.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-235.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-43-245.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-56-136.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-57-216.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-37-155.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-42-235.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-43-245.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-56-136.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-57-216.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
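The focus/skip arguments in the failing `e2e.go` invocation above are Ginkgo regular expressions: `--ginkgo.focus` selects the `[Serial]` and `[Disruptive]` specs this job runs, and `--ginkgo.skip` then filters out flaky and feature-gated specs. A minimal sketch of that selection logic, using `grep -E` to mimic the matching (the spec names below are illustrative, not taken from this run):

```shell
# Hypothetical spec names, used only to illustrate the focus/skip regexes.
focus='\[Serial\]|\[Disruptive\]'
skip='\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort'

specs='[k8s.io] Network Partition [Disruptive] [Slow] [Job] should create new pods when node is partitioned
[k8s.io] Kubelet [Serial] [Slow] regular resource usage tracking
[k8s.io] Services should be functioning for NodePort [Serial]
[k8s.io] DNS should provide DNS for the cluster'

# Keep specs matching the focus regex, then drop those matching the skip regex.
printf '%s\n' "$specs" | grep -E "$focus" | grep -Ev "$skip"
```

Note that the third spec matches the focus regex via `[Serial]` but is still excluded, because the skip regex (`Services.*functioning.*NodePort`) wins; the fourth never matches the focus regex at all.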

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-42-235.ec2.internal
to equal
    <string>: ip-172-20-56-136.ec2.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202bd030>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-43-245.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-56-136.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-57-216.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-155.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-235.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-43-245.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-56-136.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-57-216.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-37-155.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-42-235.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/362/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-36-18.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-82.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-160.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-50-138.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-33-44.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-36-18.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-37-82.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-42-160.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-50-138.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-33-44.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  1 12:53:54.432: Node ip-172-20-37-82.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc420255de0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421052600>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  1 14:03:56.065: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4200164d0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-36-18.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-82.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-160.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-50-138.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-33-44.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-36-18.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-37-82.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-42-160.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-50-138.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-33-44.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/363/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202af930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4220c8930>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-54-62.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-32-90.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-36-229.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-172.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-27.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-54-62.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-32-90.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-36-229.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-49-172.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-27.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202af930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202af930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202af930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-36-229.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-172.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-27.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-54-62.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-32-90.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-36-229.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-49-172.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-27.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-54-62.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-32-90.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202af930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202af930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4227d44d0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202af930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29512

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202af930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc4202af930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31918

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/364/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  1 21:29:28.788: Node ip-172-20-53-78.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-40-98.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-50.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-5.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-78.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-74.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-40-98.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-50.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-5.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-78.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-62-74.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-40-98.us-west-2.compute.internal
to equal
    <string>: ip-172-20-62-74.us-west-2.compute.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-40-98.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-50.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-5.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-78.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-74.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-40-98.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-50.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-5.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-78.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-62-74.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202e0760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  1 21:54:47.348: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421f9a390>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4215c1280>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/365/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-36-157.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-28.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-35.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-43-87.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-213.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-36-157.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-28.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-35.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-43-87.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-58-213.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  2 02:46:00.609: Node ip-172-20-58-213.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202bc100>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  2 00:44:45.573: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421376070>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421b363a0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-36-157.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-28.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-35.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-43-87.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-213.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-36-157.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-28.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-40-35.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-43-87.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-58-213.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-40-35.us-west-2.compute.internal
to equal
    <string>: ip-172-20-58-213.us-west-2.compute.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/366/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  2 05:34:43.056: Node ip-172-20-61-249.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42044c030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bdc30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202bdc30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bdc30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bdc30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-38-217.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-197.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-122.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-223.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-249.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-38-217.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-51-197.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-122.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-60-223.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-249.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421999650>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-60-223.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-249.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-38-217.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-197.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-122.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-60-223.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-249.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-38-217.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-51-197.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-122.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bdc30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bdc30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/367/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-37-253.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-41-27.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-201.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-13.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-9.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-37-253.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-41-27.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-49-201.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-13.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-60-9.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202e0e60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  2 08:34:45.877: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42107ccb0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-37-253.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-41-27.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-201.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-13.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-9.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-37-253.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-41-27.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-49-201.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-53-13.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-60-9.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc423ddb1d0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  2 08:55:52.270: Node ip-172-20-60-9.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-37-253.us-west-1.compute.internal
to equal
    <string>: ip-172-20-60-9.us-west-1.compute.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/368/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #26744 #26929 #38552 #45211

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  2 10:57:10.408: Node ip-172-20-61-235.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that satisify the PodAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #29512

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #28853 #31585

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #37479

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be prefer scheduled to node that satisify the NodeAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] PersistentVolumes:vsphere should test that a vspehre volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-47-156.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-235.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-35-147.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-208.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-62.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-47-156.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-235.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-35-147.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-208.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-62.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-35-147.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-208.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-45-62.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-156.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-235.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-35-147.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-208.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-45-62.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-47-156.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-235.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4229d2a80>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

@k8s-github-robot
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/369/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-61-28.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-35-127.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-3.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-23.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-170.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-61-28.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-35-127.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-37-3.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-23.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-170.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  2 16:14:41.287: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc423c92040>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202e1af0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  2 17:12:11.398: Node ip-172-20-61-28.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-35-127.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-3.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-23.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-170.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-28.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-35-127.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-37-3.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-23.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-170.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-28.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421730030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

@k8s-github-robot
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/370/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #31428

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  2 21:08:30.150: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-33-212.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-162.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-157.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-54-120.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-151.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-33-212.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-37-162.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-157.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-54-120.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-62-151.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-33-212.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-37-162.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-157.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-54-120.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-151.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-33-212.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-37-162.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-157.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-54-120.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-62-151.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*url.Error | 0xc4219367e0>: {
        Op: "Get",
        URL: "https://api.e2e-kops-aws-serial.test-aws.k8s.io/api/v1/namespaces/e2e-tests-network-partition-nr72k/pods?labelSelector=name%3Dmy-hostname-net",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 54, 193, 3, 198],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://api.e2e-kops-aws-serial.test-aws.k8s.io/api/v1/namespaces/e2e-tests-network-partition-nr72k/pods?labelSelector=name%3Dmy-hostname-net: dial tcp 54.193.3.198:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #36457

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #26744 #26929 #38552 #45211

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #28019

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #27957

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4212f6040>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  2 20:48:35.897: Node ip-172-20-54-120.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027af00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/371/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42104a030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202e02b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-56-252.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-35-136.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-249.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-36.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-149.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-56-252.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-35-136.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-42-249.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-49-36.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-52-149.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422bb4620>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-35-136.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-249.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-49-36.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-149.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-56-252.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-35-136.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-42-249.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-49-36.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-52-149.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-56-252.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202e02b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #29444

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
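The `e2e.go` failure above is just the umbrella exit status of the whole focused run; the focus/skip values are plain extended regular expressions applied to full test descriptions. As a minimal sketch (illustrative only, using `grep -E` rather than Ginkgo's own matcher), this shows that one of the failing tests' names is selected by the job's focus pattern and not excluded by its skip pattern:

```shell
#!/bin/sh
# Focus/skip regexes copied from the job's ginkgo-e2e.sh invocation above.
focus='\[Serial\]|\[Disruptive\]'
skip='\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort'

# A sample test description from this report.
name='[k8s.io] Daemon set [Serial] should run and stop complex daemon'

# Keep names matching the focus regex, then drop any matching the skip regex;
# a name that survives both filters is one this job would run.
echo "$name" | grep -E "$focus" | grep -Ev "$skip"
```

Ginkgo evaluates these patterns against the concatenated describe/it strings, which is why tags such as `[Serial]` and `[Disruptive]` appear escaped in the focus expression.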

Failed: [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202e02b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  3 01:25:05.046: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202e02b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #31407

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202e02b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  3 00:37:12.253: Node ip-172-20-35-136.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202e02b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/372/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  3 02:16:54.196: Node ip-172-20-42-69.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  3 04:13:30.043: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420bfa030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202bc6d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421086350>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-36-104.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-69.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-221.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-129.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-54-54.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-36-104.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-42-69.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-221.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-129.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-54-54.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-36-104.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-69.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-221.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-129.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-54-54.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-36-104.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-42-69.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-44-221.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-129.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-54-54.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-42-69.us-west-2.compute.internal
to equal
    <string>: ip-172-20-54-54.us-west-2.compute.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/373/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42027a0a0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-39-170.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-50.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-26.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-35-36.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-163.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-39-170.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-39-50.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-60-26.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-35-36.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-39-163.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202fcb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #35277

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-35-36.us-west-1.compute.internal
to equal
    <string>: ip-172-20-60-26.us-west-1.compute.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202fcb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] NoExecuteTaintManager [Serial] removing taint cancels eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202fcb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202fcb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #28071

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42167e320>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  3 07:31:29.147: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202fcb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202fcb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202fcb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202fcb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202fcb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-35-36.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-163.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-170.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-50.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-60-26.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-35-36.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-39-163.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-39-170.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-39-50.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-60-26.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/374/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #29516

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420bba3d0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-32-52.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-33-55.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-35-69.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-36.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-6.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-32-52.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-33-55.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-35-69.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-39-36.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-6.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420ea2910>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #28019

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-35-69.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-39-36.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-6.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-32-52.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-33-55.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-35-69.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-39-36.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-6.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-32-52.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-33-55.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #36950

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-35-69.ec2.internal
to equal
    <string>: ip-172-20-39-36.ec2.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
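
The `--ginkgo.focus` / `--ginkgo.skip` flags in the failing invocation are Go regular expressions: a spec runs when its full name matches the focus pattern and does not match the skip pattern. A small sketch checking the patterns from this job against two of the test names above (names shortened; this approximates Ginkgo's selection, it is not Ginkgo itself):

```go
package main

import (
	"fmt"
	"regexp"
)

// The focus/skip patterns from the failing ginkgo-e2e.sh invocation.
var (
	focus = regexp.MustCompile(`\[Serial\]|\[Disruptive\]`)
	skip  = regexp.MustCompile(`\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort`)
)

// runs reports whether a spec with the given name would be selected.
func runs(name string) bool {
	return focus.MatchString(name) && !skip.MatchString(name)
}

func main() {
	fmt.Println(runs("[k8s.io] Network Partition [Disruptive] [Slow] [Job] should create new pods when node is partitioned")) // true
	fmt.Println(runs("[k8s.io] Services should be functioning on NodePort [Serial]"))                                          // false
}
```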

Failed: [k8s.io] PersistentVolumes:vsphere should test that a vspehre volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42027be20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #36457

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/375/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-33-161.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-135.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-186.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-115.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-157.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-33-161.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-135.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-186.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-51-115.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-157.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  3 17:06:00.870: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  3 15:16:32.530: Node ip-172-20-48-135.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202bd160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] PersistentVolumes:vsphere should test that a vspehre volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-33-161.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-135.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-186.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-115.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-61-157.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-33-161.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-135.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-186.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-51-115.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-61-157.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421586510>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #37373

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bd160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/376/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421908040>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  3 19:43:52.439: Node ip-172-20-52-228.us-west-2.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-48-134.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-50-34.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-6.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-228.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-57-111.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-48-134.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-50-34.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-51-6.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-52-228.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-57-111.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202ae0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-48-134.us-west-2.compute.internal
to equal
    <string>: ip-172-20-52-228.us-west-2.compute.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-52-228.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-57-111.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-48-134.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-50-34.us-west-2.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-6.us-west-2.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-52-228.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-57-111.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-48-134.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-50-34.us-west-2.compute.internal" is not ready yet, Resource usage on node "ip-172-20-51-6.us-west-2.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42127a120>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  3 18:25:19.030: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/377/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421384030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421384030>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc42028e350>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-57-177.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-34-24.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-91.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-166.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-54-170.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-57-177.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-34-24.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-47-91.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-52-166.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-54-170.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-34-24.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-91.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-166.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-54-170.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-57-177.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-34-24.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-47-91.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-52-166.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-54-170.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-57-177.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc42028e350>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

Issues about this test specifically: #35279

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  3 22:23:15.466: Node ip-172-20-34-24.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/378/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-55-120.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-56.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-221.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-240.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-44-46.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-55-120.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-58-56.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-62-221.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-63-240.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-44-46.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-58-56.ec2.internal
to equal
    <string>: ip-172-20-63-240.ec2.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42100eee0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420322040>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  4 03:15:37.286: Node ip-172-20-58-56.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-44-46.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-55-120.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-58-56.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-221.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-63-240.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-44-46.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-55-120.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-58-56.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-62-221.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-63-240.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202c31f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/379/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  4 04:53:42.095: Node ip-172-20-42-3.us-west-1.compute.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-42-3.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-234.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-52-61.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-59-65.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-193.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-42-3.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-47-234.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-52-61.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-59-65.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-62-193.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42127e620>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc42027a430>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-52-61.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-59-65.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-62-193.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-42-3.us-west-1.compute.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-47-234.us-west-1.compute.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-52-61.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-59-65.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-62-193.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-42-3.us-west-1.compute.internal" is not ready yet, Resource usage on node "ip-172-20-47-234.us-west-1.compute.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420f3e710>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-serial/380/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4202bca80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:923

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: ip-172-20-51-74.ec2.internal
to equal
    <string>: ip-172-20-53-216.ec2.internal
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:331
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42018a980>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:299

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-41-105.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-74.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-216.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-34-76.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-137.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-41-105.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-51-74.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-216.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-34-76.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-40-137.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:5, cap:8>: [
        {
            s: "Resource usage on node \"ip-172-20-34-76.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-40-137.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-41-105.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-51-74.ec2.internal\" is not ready yet",
        },
        {
            s: "Resource usage on node \"ip-172-20-53-216.ec2.internal\" is not ready yet",
        },
    ]
    [Resource usage on node "ip-172-20-34-76.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-40-137.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-41-105.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-51-74.ec2.internal" is not ready yet, Resource usage on node "ip-172-20-53-216.ec2.internal" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:449
May  4 09:53:50.113: Node ip-172-20-34-76.ec2.internal did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:856

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4223faae0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for -1 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:234

Issues about this test specifically: #37259

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Expected error:
    <*errors.errorString | 0xc4202bca80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:196

@k8s-github-robot

This Issue hasn't been active in 81 days. It will be closed in 8 days (Jun 13, 2017).

cc @k8s-merge-robot @zmerlynn

You can add 'keep-open' label to prevent this from happening, or add a comment to keep it open another 90 days

@marun

marun commented Jun 14, 2017

This hasn't been active in 90 days. Closing.

@marun marun closed this as completed Jun 14, 2017