
ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new: broken test run #38476

Closed
k8s-github-robot opened this issue Dec 9, 2016 · 25 comments
Labels
kind/flake Categorizes issue or PR as related to a flaky test.


@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/39/

Multiple broken tests:

Failed: TearDown {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 17 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-e5f6923b  n1-standard-2               2016-12-07T09:26:12.845-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-ff59fe72-grp  us-central1-a  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-ff59fe72-dmed  us-central1-a  n1-standard-2               10.240.0.2   104.154.116.74  RUNNING
+gke-bootstrap-e2e-default-pool-ff59fe72-evzp  us-central1-a  n1-standard-2               10.240.0.3   35.184.20.109   RUNNING
+gke-bootstrap-e2e-default-pool-ff59fe72-yqmk  us-central1-a  n1-standard-2               10.240.0.4   35.184.42.157   RUNNING
+gke-bootstrap-e2e-default-pool-ff59fe72-dmed                     us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-ff59fe72-evzp                     us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-ff59fe72-yqmk                     us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-6a467d86-361e3cc0-bca3-11e6-9ce6-42010af0002f  bootstrap-e2e  10.96.1.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-ff59fe72-dmed  1000
+gke-bootstrap-e2e-6a467d86-4a287011-bca1-11e6-905d-42010af0002f  bootstrap-e2e  10.96.2.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-ff59fe72-yqmk  1000
+gke-bootstrap-e2e-6a467d86-4cfe0dd8-bcaa-11e6-9ce6-42010af0002f  bootstrap-e2e  10.96.3.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-ff59fe72-evzp  1000
+gke-bootstrap-e2e-6a467d86-all  bootstrap-e2e  10.96.0.0/14      udp,icmp,esp,ah,sctp,tcp
+gke-bootstrap-e2e-6a467d86-ssh  bootstrap-e2e  35.184.48.130/32  tcp:22                                  gke-bootstrap-e2e-6a467d86-node
+gke-bootstrap-e2e-6a467d86-vms  bootstrap-e2e  10.240.0.0/16     tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-6a467d86-node

Issues about this test specifically: #33373 #33416 #34060

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 10h0m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 labels Dec 9, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/48/

Multiple broken tests:

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: kubectl version {e2e.go}

exit status 1

Issues about this test specifically: #34378

Failed: DiffResources {e2e.go}

Error: 5 leaked resources
+k8s-fw-a2a1c41afbf1011e68bdf42010af0001  bootstrap-e2e  0.0.0.0/0     tcp:80                                  gke-bootstrap-e2e-3c8d9b10-node
+NAME                              REGION       IP_ADDRESS       IP_PROTOCOL  TARGET
+a2a1c41afbf1011e68bdf42010af0001  us-central1  104.154.195.226  TCP          us-central1/targetPools/a2a1c41afbf1011e68bdf42010af0001
+NAME                              REGION       SESSION_AFFINITY  BACKUP  HEALTH_CHECKS
+a2a1c41afbf1011e68bdf42010af0001  us-central1

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc420cf2100>: {
        s: "error running gcloud [container clusters --project=gke-up-g1-4-c1-5-up-clu-n --zone=us-central1-a upgrade bootstrap-e2e --cluster-version=1.5.0-beta.3.6+b930bfda10fba6 --quiet --image-type=container_vm]; got error exit status 1, stdout \"\", stderr \"Upgrading bootstrap-e2e...\\n....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\\n name: u'operation-1481398938542-886a9680'\\n operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)\\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/451169633904/zones/us-central1-a/operations/operation-1481398938542-886a9680'\\n status: StatusValueValuesEnum(DONE, 3)\\n statusMessage: u'cloud-kubernetes::UNKNOWN: Get https://104.197.8.181/api/v1/nodes/gke-bootstrap-e2e-default-pool-dbd52ec8-dbjc: harpoon unreachable error UNREACHABLE_CONNECT_REFUSED\\\\ngoroutine 1908809 [running]:\\\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc43601fa70, 0x8a, 0x1, 0x10)\\\\n\\\\tcloud/kubernetes/common/errors.go:627 +0x22f\\\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2ba8be0, 0xc4237b0bd0, 0xc423741140)\\\\n\\\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42d40d0a0, 0x1, 0x1, 0x0, 
0x1)\\\\n\\\\tcloud/kubernetes/common/errors.go:852 +0x12b\\\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42d7f2d00, 0xc424a25500, 0xc4296da3c0, 0x3, 0x4, 0x2, 0x4)\\\\n\\\\tcloud/kubernetes/common/call.go:130 +0x608\\\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42a93bd80, 0x7f882a049688, 0xc4261893e0, 0xc42f8ea4e0, 0xc42b5dd680, 0xc42155b500, 0xc424a25500, 0xc42b862900, 0xb3, 0xc422a9ef80, ...)\\\\n\\\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc428dd1bc0, 0x7f882a049688, 0xc4261893e0, 0xc424a25500, 0x7f8829c162b0, 0xc42155b490, 0xc42f8ea4e0, 0xc42b5dd680, 0xc42155b500, 0xc42b862900, ...)\\\\n\\\\tcloud/kubernetes/server/deploy.go:1844 +0xdc\\\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc428dd1bc0, 0x7f882a049688, 0xc4261893e0, 0xc424a25500, 0x7f8829c162b0, 0xc42155b490, 0xc42962dc80, 0xc42f8ea4e0, 0xc42b3ec080, 0xc422a9ef80, ...)\\\\n\\\\tcloud/kubernetes/server/deploy.go:1781 +0xb5e\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, 0x3, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1206 +0x3e5\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, 0x0, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1077 +0x108\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc400000002, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:951 +0x3d4\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 
0xc424a25500, 0x2bedd00, 0xc42a956cc0, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1957 +0xca\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc424a25440, 0xc42a956e80, 0x2bedd00, 0xc42a956cc0, 0xc425340194, 0xc, 0xc400000002, 0xc4253704d0, 0xc4295da850, 0x7f882a049688, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1949 +0x2fd\\\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\\\n\\\\tcloud/kubernetes/server/server.go:1951 +0xc44\\\\n'\\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/451169633904/zones/us-central1-a/clusters/bootstrap-e2e/nodePools/default-pool'\\n zone: u'us-central1-a'>] finished with error: cloud-kubernetes::UNKNOWN: Get https://104.197.8.181/api/v1/nodes/gke-bootstrap-e2e-default-pool-dbd52ec8-dbjc: harpoon unreachable error UNREACHABLE_CONNECT_REFUSED\\ngoroutine 1908809 [running]:\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc43601fa70, 0x8a, 0x1, 0x10)\\n\\tcloud/kubernetes/common/errors.go:627 +0x22f\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2ba8be0, 0xc4237b0bd0, 0xc423741140)\\n\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42d40d0a0, 0x1, 0x1, 0x0, 0x1)\\n\\tcloud/kubernetes/common/errors.go:852 +0x12b\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42d7f2d00, 0xc424a25500, 0xc4296da3c0, 0x3, 0x4, 0x2, 0x4)\\n\\tcloud/kubernetes/common/call.go:130 +0x608\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42a93bd80, 0x7f882a049688, 0xc4261893e0, 0xc42f8ea4e0, 0xc42b5dd680, 0xc42155b500, 0xc424a25500, 0xc42b862900, 0xb3, 0xc422a9ef80, ...)\\n\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc428dd1bc0, 0x7f882a049688, 0xc4261893e0, 0xc424a25500, 0x7f8829c162b0, 0xc42155b490, 0xc42f8ea4e0, 0xc42b5dd680, 
0xc42155b500, 0xc42b862900, ...)\\n\\tcloud/kubernetes/server/deploy.go:1844 +0xdc\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc428dd1bc0, 0x7f882a049688, 0xc4261893e0, 0xc424a25500, 0x7f8829c162b0, 0xc42155b490, 0xc42962dc80, 0xc42f8ea4e0, 0xc42b3ec080, 0xc422a9ef80, ...)\\n\\tcloud/kubernetes/server/deploy.go:1781 +0xb5e\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, 0x3, ...)\\n\\tcloud/kubernetes/server/server.go:1206 +0x3e5\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, 0x0, ...)\\n\\tcloud/kubernetes/server/server.go:1077 +0x108\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc400000002, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, ...)\\n\\tcloud/kubernetes/server/server.go:951 +0x3d4\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0x2bedd00, 0xc42a956cc0, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2, ...)\\n\\tcloud/kubernetes/server/server.go:1957 +0xca\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc424a25440, 0xc42a956e80, 0x2bedd00, 0xc42a956cc0, 0xc425340194, 0xc, 0xc400000002, 0xc4253704d0, 0xc4295da850, 0x7f882a049688, ...)\\n\\tcloud/kubernetes/server/server.go:1949 +0x2fd\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\n\\tcloud/kubernetes/server/server.go:1951 +0xc44\\n\\n\"",
    }
    error running gcloud [container clusters --project=gke-up-g1-4-c1-5-up-clu-n --zone=us-central1-a upgrade bootstrap-e2e --cluster-version=1.5.0-beta.3.6+b930bfda10fba6 --quiet --image-type=container_vm]; got error exit status 1, stdout "", stderr "Upgrading bootstrap-e2e...\n....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1481398938542-886a9680'\n operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/451169633904/zones/us-central1-a/operations/operation-1481398938542-886a9680'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'cloud-kubernetes::UNKNOWN: Get https://104.197.8.181/api/v1/nodes/gke-bootstrap-e2e-default-pool-dbd52ec8-dbjc: harpoon unreachable error UNREACHABLE_CONNECT_REFUSED\\ngoroutine 1908809 [running]:\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc43601fa70, 0x8a, 0x1, 0x10)\\n\\tcloud/kubernetes/common/errors.go:627 +0x22f\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2ba8be0, 0xc4237b0bd0, 0xc423741140)\\n\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42d40d0a0, 0x1, 0x1, 0x0, 0x1)\\n\\tcloud/kubernetes/common/errors.go:852 
+0x12b\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42d7f2d00, 0xc424a25500, 0xc4296da3c0, 0x3, 0x4, 0x2, 0x4)\\n\\tcloud/kubernetes/common/call.go:130 +0x608\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42a93bd80, 0x7f882a049688, 0xc4261893e0, 0xc42f8ea4e0, 0xc42b5dd680, 0xc42155b500, 0xc424a25500, 0xc42b862900, 0xb3, 0xc422a9ef80, ...)\\n\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc428dd1bc0, 0x7f882a049688, 0xc4261893e0, 0xc424a25500, 0x7f8829c162b0, 0xc42155b490, 0xc42f8ea4e0, 0xc42b5dd680, 0xc42155b500, 0xc42b862900, ...)\\n\\tcloud/kubernetes/server/deploy.go:1844 +0xdc\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc428dd1bc0, 0x7f882a049688, 0xc4261893e0, 0xc424a25500, 0x7f8829c162b0, 0xc42155b490, 0xc42962dc80, 0xc42f8ea4e0, 0xc42b3ec080, 0xc422a9ef80, ...)\\n\\tcloud/kubernetes/server/deploy.go:1781 +0xb5e\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, 0x3, ...)\\n\\tcloud/kubernetes/server/server.go:1206 +0x3e5\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, 0x0, ...)\\n\\tcloud/kubernetes/server/server.go:1077 +0x108\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc400000002, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, ...)\\n\\tcloud/kubernetes/server/server.go:951 +0x3d4\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0x2bedd00, 0xc42a956cc0, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2, 
...)\\n\\tcloud/kubernetes/server/server.go:1957 +0xca\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc424a25440, 0xc42a956e80, 0x2bedd00, 0xc42a956cc0, 0xc425340194, 0xc, 0xc400000002, 0xc4253704d0, 0xc4295da850, 0x7f882a049688, ...)\\n\\tcloud/kubernetes/server/server.go:1949 +0x2fd\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\n\\tcloud/kubernetes/server/server.go:1951 +0xc44\\n'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/451169633904/zones/us-central1-a/clusters/bootstrap-e2e/nodePools/default-pool'\n zone: u'us-central1-a'>] finished with error: cloud-kubernetes::UNKNOWN: Get https://104.197.8.181/api/v1/nodes/gke-bootstrap-e2e-default-pool-dbd52ec8-dbjc: harpoon unreachable error UNREACHABLE_CONNECT_REFUSED\ngoroutine 1908809 [running]:\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc43601fa70, 0x8a, 0x1, 0x10)\n\tcloud/kubernetes/common/errors.go:627 +0x22f\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2ba8be0, 0xc4237b0bd0, 0xc423741140)\n\tcloud/kubernetes/common/errors.go:681 +0x1ac\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42d40d0a0, 0x1, 0x1, 0x0, 0x1)\n\tcloud/kubernetes/common/errors.go:852 +0x12b\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42d7f2d00, 0xc424a25500, 0xc4296da3c0, 0x3, 0x4, 0x2, 0x4)\n\tcloud/kubernetes/common/call.go:130 +0x608\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42a93bd80, 0x7f882a049688, 0xc4261893e0, 0xc42f8ea4e0, 0xc42b5dd680, 0xc42155b500, 0xc424a25500, 0xc42b862900, 0xb3, 0xc422a9ef80, ...)\n\tcloud/kubernetes/server/updater/updater.go:70 +0x693\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc428dd1bc0, 0x7f882a049688, 0xc4261893e0, 0xc424a25500, 0x7f8829c162b0, 0xc42155b490, 0xc42f8ea4e0, 0xc42b5dd680, 0xc42155b500, 0xc42b862900, ...)\n\tcloud/kubernetes/server/deploy.go:1844 
+0xdc\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc428dd1bc0, 0x7f882a049688, 0xc4261893e0, 0xc424a25500, 0x7f8829c162b0, 0xc42155b490, 0xc42962dc80, 0xc42f8ea4e0, 0xc42b3ec080, 0xc422a9ef80, ...)\n\tcloud/kubernetes/server/deploy.go:1781 +0xb5e\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, 0x3, ...)\n\tcloud/kubernetes/server/server.go:1206 +0x3e5\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, 0x0, ...)\n\tcloud/kubernetes/server/server.go:1077 +0x108\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0xc400000002, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2bedd00, 0xc42a956cc0, ...)\n\tcloud/kubernetes/server/server.go:951 +0x3d4\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc4295da850, 0x7f882a049688, 0xc42f0dac60, 0xc424a25500, 0x2bedd00, 0xc42a956cc0, 0xc42f8ea4e0, 0xc42b3ec080, 0xc424ae3620, 0x2, ...)\n\tcloud/kubernetes/server/server.go:1957 +0xca\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc424a25440, 0xc42a956e80, 0x2bedd00, 0xc42a956cc0, 0xc425340194, 0xc, 0xc400000002, 0xc4253704d0, 0xc4295da850, 0x7f882a049688, ...)\n\tcloud/kubernetes/server/server.go:1949 +0x2fd\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\n\tcloud/kubernetes/server/server.go:1951 +0xc44\n\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:93

Issues about this test specifically: #38172

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/61/

Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc422db99f0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 3, 99],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.3.99:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422945000>: {
        s: "Namespace e2e-tests-services-pz8rp is active",
    }
    Namespace e2e-tests-services-pz8rp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422999c80>: {
        s: "Namespace e2e-tests-services-pz8rp is active",
    }
    Namespace e2e-tests-services-pz8rp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422701e30>: {
        s: "Namespace e2e-tests-services-pz8rp is active",
    }
    Namespace e2e-tests-services-pz8rp is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/64/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ed3090>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ebc940>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:162
Expected error:
    <*errors.errorString | 0xc422462840>: {
        s: "expected pod \"downwardapi-volume-d20b68ec-c319-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'downwardapi-volume-d20b68ec-c319-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-d20b68ec-c319-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'downwardapi-volume-d20b68ec-c319-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36694

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422ba7880>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc4217311f0>: {
        s: "expected pod \"downwardapi-volume-95cbb9eb-c30c-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'downwardapi-volume-95cbb9eb-c30c-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-95cbb9eb-c30c-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'downwardapi-volume-95cbb9eb-c30c-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37423

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224f52c0>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:189
Expected error:
    <*errors.errorString | 0xc425103010>: {
        s: "expected pod \"downwardapi-volume-3a2b49fc-c34b-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'downwardapi-volume-3a2b49fc-c34b-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-3a2b49fc-c34b-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'downwardapi-volume-3a2b49fc-c34b-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124
Expected error:
    <*errors.errorString | 0xc4203acd60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422e6d580>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422e6c0f0>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
Expected error:
    <*errors.errorString | 0xc422f87b00>: {
        s: "expected pod \"pod-secrets-b07bb749-c342-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'pod-secrets-b07bb749-c342-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-b07bb749-c342-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'pod-secrets-b07bb749-c342-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:154
Expected error:
    <*errors.errorString | 0xc4203acd60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422d84b00>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:203
Expected error:
    <*errors.errorString | 0xc42277fd00>: {
        s: "expected pod \"downwardapi-volume-fe78e259-c344-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'downwardapi-volume-fe78e259-c344-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-fe78e259-c344-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'downwardapi-volume-fe78e259-c344-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37531

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422962810>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221e5870>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc421a4ba20>: {
        s: "expected pod \"pod-77c23b1f-c334-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'pod-77c23b1f-c334-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-77c23b1f-c334-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'pod-77c23b1f-c334-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37500

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:180
Expected error:
    <*errors.errorString | 0xc420c487f0>: {
        s: "expected pod \"downwardapi-volume-bca6354d-c307-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'downwardapi-volume-bca6354d-c307-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-bca6354d-c307-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'downwardapi-volume-bca6354d-c307-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Dec 15 20:47:38.708: CPU usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-7d6de352-x8rx:
 container "runtime": expected 95th% usage < 0.500; got 0.688
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc422f14b20>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-7d6de352-x8rx gke-bootstrap-e2e-default-pool-7d6de352-x8rx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 21:21:18 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-15 21:21:18 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 21:21:18 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-7d6de352-x8rx gke-bootstrap-e2e-default-pool-7d6de352-x8rx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 21:21:18 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-15 21:21:18 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 21:21:18 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105
Expected error:
    <*errors.errorString | 0xc421ace020>: {
        s: "expected pod \"pod-4447af5e-c316-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'pod-4447af5e-c316-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-4447af5e-c316-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'pod-4447af5e-c316-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #26780

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:196
Expected error:
    <*errors.errorString | 0xc423a4cb30>: {
        s: "expected pod \"downwardapi-volume-a5f46467-c34e-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'downwardapi-volume-a5f46467-c34e-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-a5f46467-c34e-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'downwardapi-volume-a5f46467-c34e-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422ee67f0>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:37
Expected error:
    <*errors.errorString | 0xc4220a6be0>: {
        s: "expected pod \"pod-configmaps-e052b72a-c346-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'pod-configmaps-e052b72a-c346-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-e052b72a-c346-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'pod-configmaps-e052b72a-c346-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29052

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:58
Expected error:
    <*errors.errorString | 0xc422fae910>: {
        s: "expected pod \"downwardapi-volume-927f8550-c341-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'downwardapi-volume-927f8550-c341-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-927f8550-c341-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'downwardapi-volume-927f8550-c341-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36300

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213d0c90>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc42314e7c0>: {
        s: "expected pod \"pod-secrets-4ff9c059-c33c-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'pod-secrets-4ff9c059-c33c-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-4ff9c059-c33c-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'pod-secrets-4ff9c059-c33c-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29221

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc420fe1770>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 30, 36],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.30.36:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422ba7e90>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc4223c5e50>: {
        s: "expected pod \"pod-secrets-76aa4a17-c338-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'pod-secrets-76aa4a17-c338-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-76aa4a17-c338-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'pod-secrets-76aa4a17-c338-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4219f9f10>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-7d6de352-x8rx gke-bootstrap-e2e-default-pool-7d6de352-x8rx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:15:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:34:34 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:34:34 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-7d6de352-x8rx gke-bootstrap-e2e-default-pool-7d6de352-x8rx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:15:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:34:34 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:34:34 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422f005d0>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213e7bf0>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc42264a480>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Dec 15 19:29:21.699: CPU usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-7d6de352-x8rx:
 container "runtime": expected 95th% usage < 0.200; got 0.561
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4202cb2b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-7d6de352-x8rx gke-bootstrap-e2e-default-pool-7d6de352-x8rx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:15:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:34:34 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:34:34 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-7d6de352-x8rx gke-bootstrap-e2e-default-pool-7d6de352-x8rx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:15:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:34:34 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-15 12:34:34 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
Expected error:
    <*errors.errorString | 0xc422aca630>: {
        s: "expected pod \"pod-62b2956b-c33b-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'pod-62b2956b-c33b-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-62b2956b-c33b-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'pod-62b2956b-c33b-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37439

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc4203acd60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:171
Expected error:
    <*errors.errorString | 0xc421acf770>: {
        s: "expected pod \"downwardapi-volume-e522ddaa-c318-11e6-8b5c-0242ac110007\" success: gave up waiting for pod 'downwardapi-volume-e522ddaa-c318-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-e522ddaa-c318-11e6-8b5c-0242ac110007" success: gave up waiting for pod 'downwardapi-volume-e522ddaa-c318-11e6-8b5c-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422d5f410>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422ba78e0>: {
        s: "Namespace e2e-tests-services-xt2nz is active",
    }
    Namespace e2e-tests-services-xt2nz is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
    <*errors.errorString | 0xc422f86200>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32122 #38040

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/65/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203c94e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Dec 16 00:31:19.223: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421c22010>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:15, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63617476495, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63617476495, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:15, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63617476495, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63617476495, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1098
Expected error:
    <*errors.errorString | 0xc421cf8150>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #26172

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc4203c94e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:66
Expected error:
    <*errors.errorString | 0xc4203c94e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:283

Issues about this test specifically: #37144

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/67/

Multiple broken tests:

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:71
Expected error:
    <*errors.StatusError | 0xc4214ba280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Expected error:
    <*errors.StatusError | 0xc42172e480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:391

Issues about this test specifically: #37373

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
The deployment that holds the oldest selector shouldn't have the overlapping annotation
Expected error:
    <*errors.errorString | 0xc4203ac990>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1225

Issues about this test specifically: #31502 #32947 #38646

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.StatusError | 0xc4218d0d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #32644

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.StatusError | 0xc421f0a080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #32639

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:51
Expected error:
    <*errors.errorString | 0xc420870bd0>: {
        s: "rc manager never added the failure condition for rc \"condition-test\": []api.ReplicationControllerCondition(nil)",
    }
    rc manager never added the failure condition for rc "condition-test": []api.ReplicationControllerCondition(nil)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:206

Issues about this test specifically: #37027

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:92
Expected error:
    <*errors.errorString | 0xc42318f490>: {
        s: "rs controller never added the failure condition for replica set \"condition-test\": []extensions.ReplicaSetCondition(nil)",
    }
    rs controller never added the failure condition for replica set "condition-test": []extensions.ReplicaSetCondition(nil)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:221

Issues about this test specifically: #36554

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Expected error:
    <*errors.StatusError | 0xc422fc0e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:114

Issues about this test specifically: #37361 #37919

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:475
Pod was not deleted during network partition.
Expected
    <nil>: nil
to equal
    <*errors.errorString | 0xc4203ac990>: {
        s: "timed out waiting for the condition",
    }
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:464

Issues about this test specifically: #36950

Failed: kubectl version {e2e.go}

exit status 1

Issues about this test specifically: #34378

Failed: [k8s.io] DisruptionController should create a PodDisruptionBudget {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:52
Expected error:
    <*errors.StatusError | 0xc421798280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #37017

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Expected error:
    <*errors.StatusError | 0xc42186c100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:222

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:312
Dec 16 16:39:22.186: Failed to query for cronJobs: the server could not find the requested resource
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:269

Issues about this test specifically: #37428

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.errorString | 0xc4203ac990>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:175

Issues about this test specifically: #32646

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
Expected error:
    <*errors.errorString | 0xc422817ce0>: {
        s: "expected \"mode of file \\\"/etc/secret-volume/data-1\\\": -r--r-----\" in container output: Expected\n    <string>: mode of file \"/etc/secret-volume/data-1\": grwxrwxrwx\n    content of file \"/etc/secret-volume/data-1\": value-1\n    \n    \nto contain substring\n    <string>: mode of file \"/etc/secret-volume/data-1\": -r--r-----",
    }
    expected "mode of file \"/etc/secret-volume/data-1\": -r--r-----" in container output: Expected
        <string>: mode of file "/etc/secret-volume/data-1": grwxrwxrwx
        content of file "/etc/secret-volume/data-1": value-1
        
        
    to contain substring
        <string>: mode of file "/etc/secret-volume/data-1": -r--r-----
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Expected error:
    <*errors.StatusError | 0xc42186c600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:149

Issues about this test specifically: #38254

Failed: list nodes {e2e.go}

exit status 1

Issues about this test specifically: #38667

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Expected error:
    <*errors.StatusError | 0xc422930080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:287

Issues about this test specifically: #38083

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Expected error:
    <*errors.StatusError | 0xc42196b300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:183

Issues about this test specifically: #38439

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.StatusError | 0xc421985000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc421ed7040>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: total pods available: 0, less than the min required: 3",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: total pods available: 0, less than the min required: 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Expected error:
    <*errors.StatusError | 0xc42167dd00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:407

Issues about this test specifically: #37774

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Pod was not deleted during network partition.
Expected
    <nil>: nil
to equal
    <*errors.errorString | 0xc4203ac990>: {
        s: "timed out waiting for the condition",
    }
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:347

Issues about this test specifically: #37479

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc421475660>: {
        s: "deployment \"nginx\" never updated with the desired condition and reason: []",
    }
    deployment "nginx" never updated with the desired condition and reason: []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1323

Issues about this test specifically: #31697 #36574

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.StatusError | 0xc420f99980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:194

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:501
Expected error:
    <*errors.StatusError | 0xc420877d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:452

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc42294fdd0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:25, Replicas:3, UpdatedReplicas:1, AvailableReplicas:3, UnavailableReplicas:0, Conditions:[]extensions.DeploymentCondition(nil)}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:25, Replicas:3, UpdatedReplicas:1, AvailableReplicas:3, UnavailableReplicas:0, Conditions:[]extensions.DeploymentCondition(nil)}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:69
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:60

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/68/

Multiple broken tests:

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
Expected error:
    <*errors.errorString | 0xc422a4ba00>: {
        s: "expected pod \"client-containers-29434e8f-c437-11e6-89cf-0242ac110002\" success: gave up waiting for pod 'client-containers-29434e8f-c437-11e6-89cf-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-29434e8f-c437-11e6-89cf-0242ac110002" success: gave up waiting for pod 'client-containers-29434e8f-c437-11e6-89cf-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29994

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c66440>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b5e9d0>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225651a0>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e05d10>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225b8d20>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225b9680>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422678b10>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224b7750>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
Expected error:
    <*errors.errorString | 0xc42320cdb0>: {
        s: "expected pod \"client-containers-484f1ba0-c43f-11e6-89cf-0242ac110002\" success: gave up waiting for pod 'client-containers-484f1ba0-c43f-11e6-89cf-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-484f1ba0-c43f-11e6-89cf-0242ac110002" success: gave up waiting for pod 'client-containers-484f1ba0-c43f-11e6-89cf-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36706

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42299f310>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227f5560>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:64
Expected error:
    <*errors.errorString | 0xc4229c0370>: {
        s: "expected pod \"client-containers-8dd99487-c439-11e6-89cf-0242ac110002\" success: gave up waiting for pod 'client-containers-8dd99487-c439-11e6-89cf-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-8dd99487-c439-11e6-89cf-0242ac110002" success: gave up waiting for pod 'client-containers-8dd99487-c439-11e6-89cf-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29467

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4237f0380>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422678420>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421766120>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e02ed0>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc420923130>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 154, 207, 205],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.154.207.205:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422c0f080>: {
        s: "Namespace e2e-tests-services-n5l64 is active",
    }
    Namespace e2e-tests-services-n5l64 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/70/

Multiple broken tests:

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc422682000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Dec 17 15:27:46.884: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37774

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Dec 17 19:18:07.823: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38254

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Dec 17 13:12:04.910: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Dec 17 18:10:08.448: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc421cd7b60>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 5, less than the min required: 6",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 5, less than the min required: 6
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc4218c6220>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Dec 17 18:43:52.177: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1253
Dec 17 15:58:30.773: Failed getting pod e2e-test-nginx-pod: Timeout while waiting for pods with labels "run=e2e-test-nginx-pod" to be running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1231

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:188
Expected error:
    <*errors.errorString | 0xc4203aacb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:169

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc422682010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:351
Expected error:
    <*errors.errorString | 0xc4203aacb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36649

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc422682010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:953

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc4232a8d10>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63617634522, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63617634522, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63617634638, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63617634638, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3837372172\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63617634522, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63617634522, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63617634638, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63617634638, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3837372172\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1335

Issues about this test specifically: #31697 #36574

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Dec 17 16:57:48.377: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:310
Expected error:
    <*errors.errorString | 0xc4203aacb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #35793

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc422576020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/80/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422542c40>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42353bb70>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422d698f0>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e65ca0>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423250040>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423146210>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224fced0>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42170bba0>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4239ecde0>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422d950c0>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc4209a0640>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 36, 127],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.36.127:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422d447d0>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225427b0>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421418240>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f28ac0>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225bf240>: {
        s: "Namespace e2e-tests-services-h6jvm is active",
    }
    Namespace e2e-tests-services-h6jvm is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/81/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423101c90>: {
        s: "Namespace e2e-tests-services-t46c3 is active",
    }
    Namespace e2e-tests-services-t46c3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224e24e0>: {
        s: "Namespace e2e-tests-services-t46c3 is active",
    }
    Namespace e2e-tests-services-t46c3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4230f27d0>: {
        s: "Namespace e2e-tests-services-t46c3 is active",
    }
    Namespace e2e-tests-services-t46c3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422165400>: {
        s: "Namespace e2e-tests-services-t46c3 is active",
    }
    Namespace e2e-tests-services-t46c3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc4227921e0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 130, 211, 157, 149],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 130.211.157.149:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:417

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222b7d60>: {
        s: "Namespace e2e-tests-services-t46c3 is active",
    }
    Namespace e2e-tests-services-t46c3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4238067f0>: {
        s: "Namespace e2e-tests-services-t46c3 is active",
    }
    Namespace e2e-tests-services-t46c3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Dec 21 10:50:45.610: Node gke-bootstrap-e2e-default-pool-5fa9d4ce-upyn did not become ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:291

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e822b0>: {
        s: "Namespace e2e-tests-services-t46c3 is active",
    }
    Namespace e2e-tests-services-t46c3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221c7430>: {
        s: "Namespace e2e-tests-services-t46c3 is active",
    }
    Namespace e2e-tests-services-t46c3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/82/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc422a68320>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 185, 233],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.198.185.233:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220604a0>: {
        s: "Namespace e2e-tests-services-b5c13 is active",
    }
    Namespace e2e-tests-services-b5c13 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422513160>: {
        s: "Namespace e2e-tests-services-b5c13 is active",
    }
    Namespace e2e-tests-services-b5c13 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4228ea660>: {
        s: "Namespace e2e-tests-services-b5c13 is active",
    }
    Namespace e2e-tests-services-b5c13 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422920480>: {
        s: "Namespace e2e-tests-services-b5c13 is active",
    }
    Namespace e2e-tests-services-b5c13 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422fa6430>: {
        s: "Namespace e2e-tests-services-b5c13 is active",
    }
    Namespace e2e-tests-services-b5c13 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4229341d0>: {
        s: "Namespace e2e-tests-services-b5c13 is active",
    }
    Namespace e2e-tests-services-b5c13 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422879ed0>: {
        s: "Namespace e2e-tests-services-b5c13 is active",
    }
    Namespace e2e-tests-services-b5c13 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d44890>: {
        s: "Namespace e2e-tests-services-b5c13 is active",
    }
    Namespace e2e-tests-services-b5c13 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/98/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422db9370>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221cb760>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218ddb20>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c92360>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422a97010>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421795d90>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213d1860>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422bdcd60>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d999b0>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42264f8a0>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227cd2c0>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423108340>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219ad140>: {
        s: "Namespace e2e-tests-services-wt96h is active",
    }
    Namespace e2e-tests-services-wt96h is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc4220ea230>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 197, 220, 213],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.197.220.213:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/109/
Multiple broken tests:

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Dec 30 07:09:22.947: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc421756040>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:310
Expected error:
    <*errors.errorString | 0xc4203acdb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #35793

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421434620>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Dec 30 12:47:49.255: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc422218010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc42181c010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Dec 30 15:29:04.283: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc42026ad30>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:24, Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618714352, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618714352, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618714375, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618714345, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"NewReplicaSetAvailable\", Message:\"Replica set \\\"nginx-4212393342\\\" has successfully progressed.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:24, Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618714352, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618714352, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618714375, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618714345, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"NewReplicaSetAvailable", Message:"Replica set \"nginx-4212393342\" has successfully progressed."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1253
Dec 30 07:43:10.456: Failed getting pod e2e-test-nginx-pod: Timeout while waiting for pods with labels "run=e2e-test-nginx-pod" to be running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1231

Issues about this test specifically: #29834 #35757

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/110/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a92c80>: {
        s: "Namespace e2e-tests-nettest-db0w7 is active",
    }
    Namespace e2e-tests-nettest-db0w7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42309f850>: {
        s: "Namespace e2e-tests-nettest-db0w7 is active",
    }
    Namespace e2e-tests-nettest-db0w7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc4203aeaa0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #36178

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422022680>: {
        s: "Namespace e2e-tests-nettest-db0w7 is active",
    }
    Namespace e2e-tests-nettest-db0w7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227082d0>: {
        s: "Namespace e2e-tests-nettest-db0w7 is active",
    }
    Namespace e2e-tests-nettest-db0w7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c611e0>: {
        s: "Namespace e2e-tests-nettest-db0w7 is active",
    }
    Namespace e2e-tests-nettest-db0w7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422a39a60>: {
        s: "Namespace e2e-tests-nettest-db0w7 is active",
    }
    Namespace e2e-tests-nettest-db0w7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ff8cc0>: {
        s: "Namespace e2e-tests-nettest-db0w7 is active",
    }
    Namespace e2e-tests-nettest-db0w7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4226d0530>: {
        s: "Namespace e2e-tests-nettest-db0w7 is active",
    }
    Namespace e2e-tests-nettest-db0w7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f377e0>: {
        s: "Namespace e2e-tests-nettest-db0w7 is active",
    }
    Namespace e2e-tests-nettest-db0w7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/111/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422912530>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc420c69d70>: {
        s: "error while stopping RC: service1: Get https://35.184.32.34/api/v1/namespaces/e2e-tests-services-rj37w/replicationcontrollers/service1: dial tcp 35.184.32.34:443: getsockopt: connection refused",
    }
    error while stopping RC: service1: Get https://35.184.32.34/api/v1/namespaces/e2e-tests-services-rj37w/replicationcontrollers/service1: dial tcp 35.184.32.34:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:417

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421642db0>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219a91a0>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e2ca00>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421612130>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42252ba50>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215548d0>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e31100>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4236ab3c0>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210a7830>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ab3f60>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42252b7e0>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421359c30>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42183a000>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4232ddde0>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422441d30>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216126b0>: {
        s: "Namespace e2e-tests-services-rj37w is active",
    }
    Namespace e2e-tests-services-rj37w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/118/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e48440>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42249a770>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc422e1f5c0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222c58e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42333a680>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42160ae00>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423051670>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ab24f0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421287250>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224320e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210eb6a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421df60a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42231c310>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4223d0060>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42192aa40>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42252fa20>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42251b7e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422714b20>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 10:02:27 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213f1300>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-465da75e-x1mx gke-bootstrap-e2e-default-pool-465da75e-x1mx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:06:51 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-02 06:24:45 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/122/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214e96c0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b17a80>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4226fb620>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422b13680>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 20:36:47 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 20:36:47 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4226fb720>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 20:36:47 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 20:36:47 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225142a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422501a70>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a28550>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4225c6d60>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42195a080>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b166d0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4226c2680>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422a59110>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42236e1d0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 20:36:47 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 20:36:47 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225e09c0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422035280>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422a58d60>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42200b800>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 20:36:47 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 20:36:47 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420268ed0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:42:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 14:58:54 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
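Every failure in this run reduces to the same pod-readiness error string from the e2e framework. When triaging a dump like this, a small helper script (hypothetical, not part of the e2e suite) can pull out which containers were unhealthy, so the recurring pod can be spotted without reading each block:

```python
import re

# Abridged copy of the error string emitted by the e2e pod-readiness check above.
err = ('1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY '
      'state in 5m0s\n'
      'fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 '
      'gke-bootstrap-e2e-default-pool-610f5a2c-tjq6 Pending '
      '[{Ready False ContainersNotReady containers with unready status: '
      '[fluentd-cloud-logging]}]')

def unready_containers(msg):
    """Return the container names listed after 'unready status:' in the message."""
    m = re.search(r'containers with unready status: \[([^\]]+)\]', msg)
    return m.group(1).split() if m else []

print(unready_containers(err))  # -> ['fluentd-cloud-logging']
```

Here the culprit for all of run /39 is the same `fluentd-cloud-logging` pod stuck Pending.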

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/124/
Multiple broken tests:

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Jan  4 11:35:55.708: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221c59a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227c1250>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4213c6970>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224a77f0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421bdf6b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422984900>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42101a770>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220a1050>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217a4af0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42378e720>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4226da6a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e15c30>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422c0ede0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221c4400>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422203820>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42131cc30>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-x3220 gke-bootstrap-e2e-default-pool-3df3b176-t1sx Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-04 10:17:01 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660
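Flake runs like this one report many tests broken for a single underlying cause. A quick way to see that is to normalize the volatile parts of each failure message (pod counts, generated namespace suffixes) and count duplicates; this is a hypothetical triage helper, not part of the e2e tooling:

```python
import re
from collections import Counter

failures = [
    '1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s',
    '1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s',
    'Namespace e2e-tests-services-jnwb1 is active',
]

def normalize(msg):
    # Collapse run-specific details so identical root causes group together.
    msg = re.sub(r'\d+ / \d+', 'N / M', msg)
    msg = re.sub(r'e2e-tests-[a-z]+-\w+', 'e2e-tests-<ns>', msg)
    return msg

counts = Counter(normalize(m) for m in failures)
for cause, n in counts.most_common():
    print(n, cause)
```

Applied to the runs in this issue, the dozens of failed tests collapse to a handful of distinct causes (one pending kube-system pod, one leaked namespace, one lost SSH tunnel).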

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/133/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222a4270>: {
        s: "Namespace e2e-tests-services-jnwb1 is active",
    }
    Namespace e2e-tests-services-jnwb1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422438060>: {
        s: "Namespace e2e-tests-services-jnwb1 is active",
    }
    Namespace e2e-tests-services-jnwb1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421eaf1d0>: {
        s: "Namespace e2e-tests-services-jnwb1 is active",
    }
    Namespace e2e-tests-services-jnwb1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42267c250>: {
        s: "Namespace e2e-tests-services-jnwb1 is active",
    }
    Namespace e2e-tests-services-jnwb1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4234696e0>: {
        s: "Namespace e2e-tests-services-jnwb1 is active",
    }
    Namespace e2e-tests-services-jnwb1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c04070>: {
        s: "Namespace e2e-tests-services-jnwb1 is active",
    }
    Namespace e2e-tests-services-jnwb1 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc4231db4a0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 54, 104],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.54.104:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
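The `dial tcp 35.184.54.104:443: getsockopt: connection refused` here is expected transiently while the apiserver restarts; the failure is the test giving up too early. A tolerant client polls the endpoint with a deadline rather than dialing once. A minimal sketch of that pattern (an illustrative helper, not the e2e framework's actual wait code):

```python
import socket
import time

def wait_for_endpoint(host, port, timeout=60.0, interval=1.0):
    """Poll a TCP endpoint until it accepts connections or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Connection refused/unreachable while the server is coming back up.
            time.sleep(interval)
    return False
```

Usage against the address in the log would be `wait_for_endpoint("35.184.54.104", 443)`; it returns `False` only if the apiserver never comes back within the deadline.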

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:66
Expected error:
    <*errors.StatusError | 0xc42332cc00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \\\"gke-d28f33183f326e9b13a5\\\"?'\\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-82aeebea-1b74:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-d28f33183f326e9b13a5\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-82aeebea-1b74:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-d28f33183f326e9b13a5\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-82aeebea-1b74:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:325

Issues about this test specifically: #35422

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/134/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-b878e47a-k4j0\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-b878e47a-k4j0" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 23:54:55.741: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42117ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36564

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 18:09:48.842: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421745678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38516

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 20:55:52.667: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c7ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37914

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 23:34:44.075: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42126e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 20:05:00.355: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a5c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 23:58:13.374: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220c3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420309790>: {
        s: "4 / 13 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b878e47a-k4j0 gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]\nheapster-v1.2.0-2168613315-64vbv                                   gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]\nkube-dns-4101612645-tqbwj                                          gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b878e47a-k4j0            gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:48 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:49 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]\n",
    }
    4 / 13 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b878e47a-k4j0 gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]
    heapster-v1.2.0-2168613315-64vbv                                   gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]
    kube-dns-4101612645-tqbwj                                          gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b878e47a-k4j0            gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:48 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:49 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:23:58.151: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cae278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:03:09.933: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fd3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:54:56.774: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d46278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 23:51:38.266: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421dfec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203d0ef0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:34:32.817: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421dff678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 00:31:37.647: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42179f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31408

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 20:40:53.534: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212f4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 20:15:56.292: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422305678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38439

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 00:58:03.073: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421375678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:38:00.096: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421478278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 22:11:07.290: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b2a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 20:22:33.580: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42187ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 00:44:34.145: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201f3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 23:27:58.457: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cdcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:08:13.516: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420968278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:50:25.076: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219a2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:47:16.764: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201f3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 20:44:08.458: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 00:27:44.130: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42207e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37531

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:27:35.511: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c66c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203d0ef0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 00:54:12.928: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d46c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 23:37:59.560: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b02278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421a56310>: {
        s: "at least one node failed to be ready",
    }
    at least one node failed to be ready
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:80

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 18:13:03.828: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a44278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35579

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 00:24:31.131: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42117b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:47:38.868: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d9b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:21:12.895: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420cec278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29831

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 18:46:40.278: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420969678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 22:14:39.185: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a45678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 18:03:16.388: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421404c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:40:32.023: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421584c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 23:41:43.786: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fe8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 23:45:00.989: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 22:47:33.759: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d9ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 20:47:26.384: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bf0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 23:48:14.342: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218fb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 17:38:00.756: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a86c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 00:50:58.824: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217bac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420250c30>: {
        s: "4 / 13 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b878e47a-k4j0 gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]\nheapster-v1.2.0-2168613315-64vbv                                   gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]\nkube-dns-4101612645-tqbwj                                          gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b878e47a-k4j0            gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:48 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:49 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]\n",
    }
    4 / 13 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b878e47a-k4j0 gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]
    heapster-v1.2.0-2168613315-64vbv                                   gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]
    kube-dns-4101612645-tqbwj                                          gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b878e47a-k4j0            gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:48 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:49 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:17:55.133: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bf0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203d0ef0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421fd5330>: {
        s: "4 / 13 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b878e47a-k4j0 gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]\nheapster-v1.2.0-2168613315-64vbv                                   gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]\nkube-dns-4101612645-tqbwj                                          gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b878e47a-k4j0            gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:48 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:49 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]\n",
    }
    4 / 13 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b878e47a-k4j0 gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]
    heapster-v1.2.0-2168613315-64vbv                                   gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]
    kube-dns-4101612645-tqbwj                                          gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b878e47a-k4j0            gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:48 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:49 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 18:06:29.529: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f20278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:24:24.167: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420311678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:44:03.604: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a45678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 00:13:57.414: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421600278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:31:15.537: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421405678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:54:06.718: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c67678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 22:40:36.103: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a5c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 17:41:16.333: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422004c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 22:31:00.057: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421794c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4204bcd70>: {
        s: "4 / 13 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b878e47a-k4j0 gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]\nheapster-v1.2.0-2168613315-64vbv                                   gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]\nkube-dns-4101612645-tqbwj                                          gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b878e47a-k4j0            gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:48 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:49 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]\n",
    }
    4 / 13 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b878e47a-k4j0 gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]
    heapster-v1.2.0-2168613315-64vbv                                   gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:48 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]
    kube-dns-4101612645-tqbwj                                          gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:52 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:18:22 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b878e47a-k4j0            gke-bootstrap-e2e-default-pool-b878e47a-k4j0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:48 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:49 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 15:17:55 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 00:18:00.767: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ccac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 23:13:14.112: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421895678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:50:52.137: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421255678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:11:27.049: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421caec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 21:44:24.479: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:06:21.020: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cbec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:58:29.835: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42109e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:20:25.897: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210ff678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc4203d0ef0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33887

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 20:01:43.161: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42205a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32646

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 20:08:13.667: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cae278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30851

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 18:00:02.856: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42109ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 22:44:08.777: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421636c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38511

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 17:34:49.249: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422126278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 01:01:16.276: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fed678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 19:16:12.538: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cf3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27079

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc4203d0ef0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  7 20:19:15.567: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214a3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  8 00:10:38.510: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/135/
Multiple broken tests:

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc4203b0bc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36288 #36913

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Jan  8 02:12:01.171: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc4224db3e0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 15, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 15, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc4203b0bc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Jan  8 06:33:36.646: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/142/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc421b54280>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 36, 127],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.36.127:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:65
Expected error:
    <*errors.StatusError | 0xc421f71e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \\\"gke-2cebc7ed9956f7ce574d\\\"?'\\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-feb7c232-frgn:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-2cebc7ed9956f7ce574d\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-feb7c232-frgn:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-2cebc7ed9956f7ce574d\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-feb7c232-frgn:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:325

Issues about this test specifically: #35601

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4206481b0>: {
        s: "Namespace e2e-tests-services-pmwq0 is active",
    }
    Namespace e2e-tests-services-pmwq0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422a8bf20>: {
        s: "Namespace e2e-tests-services-pmwq0 is active",
    }
    Namespace e2e-tests-services-pmwq0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c3c140>: {
        s: "Namespace e2e-tests-services-pmwq0 is active",
    }
    Namespace e2e-tests-services-pmwq0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224b7350>: {
        s: "Namespace e2e-tests-services-pmwq0 is active",
    }
    Namespace e2e-tests-services-pmwq0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b979f0>: {
        s: "Namespace e2e-tests-services-pmwq0 is active",
    }
    Namespace e2e-tests-services-pmwq0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421db51f0>: {
        s: "Namespace e2e-tests-services-pmwq0 is active",
    }
    Namespace e2e-tests-services-pmwq0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421682c60>: {
        s: "Namespace e2e-tests-services-pmwq0 is active",
    }
    Namespace e2e-tests-services-pmwq0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4228db970>: {
        s: "Namespace e2e-tests-services-pmwq0 is active",
    }
    Namespace e2e-tests-services-pmwq0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423320fb0>: {
        s: "Namespace e2e-tests-services-pmwq0 is active",
    }
    Namespace e2e-tests-services-pmwq0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225b6bd0>: {
        s: "Namespace e2e-tests-services-pmwq0 is active",
    }
    Namespace e2e-tests-services-pmwq0 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/154/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4223023b0>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4226a18e0>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422ae6230>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:150
Expected error:
    <*errors.errorString | 0xc42126b400>: {
        s: "failed to get logs from pod-secrets-62c10fde-da6b-11e6-913f-0242ac110005 for secret-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-secrets-62c10fde-da6b-11e6-913f-0242ac110005)",
    }
    failed to get logs from pod-secrets-62c10fde-da6b-11e6-913f-0242ac110005 for secret-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-62c10fde-da6b-11e6-913f-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422349d90>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422350e60>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d409b0>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421965df0>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42233f160>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217e5e10>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423213310>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423113c70>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217e59f0>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423001630>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211f9000>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc420eb2c80>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 35, 124],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.35.124:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215bf450>: {
        s: "Namespace e2e-tests-services-xhgb7 is active",
    }
    Namespace e2e-tests-services-xhgb7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/168/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc4210968c0>: {
        s: "expected pod \"pod-c4e1e615-de10-11e6-9cac-0242ac11000b\" success: gave up waiting for pod 'pod-c4e1e615-de10-11e6-9cac-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-c4e1e615-de10-11e6-9cac-0242ac11000b" success: gave up waiting for pod 'pod-c4e1e615-de10-11e6-9cac-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37500

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc4215f2500>: {
        s: "expected pod \"pod-secrets-7f22f3bb-de11-11e6-9cac-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-7f22f3bb-de11-11e6-9cac-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-7f22f3bb-de11-11e6-9cac-0242ac11000b" success: gave up waiting for pod 'pod-secrets-7f22f3bb-de11-11e6-9cac-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc420388770>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/199/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42173d2d0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210b6760>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc4203d1070>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4226e10a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4231743e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4228081e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203d1070>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4232faac0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4230c5bc0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220a5560>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423dfd860>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-autoscaler-2715466192-vkxfz gke-bootstrap-e2e-default-pool-bb58a6a2-lq99 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346
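Every scheduler-predicate and resize failure in this run shares one root symptom: the `kube-dns-autoscaler-2715466192-vkxfz` pod stuck `Pending` with the `autoscaler` container unready. A minimal, cluster-independent triage sketch (the condition-line format is assumed from the log excerpts above) for pulling the unready container name out of a pasted condition line:

```shell
#!/bin/sh
# Extract the unready container list from a pod condition line as printed
# by the e2e framework (format taken from the log excerpts in this issue).
line='{Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-23 21:16:03 -0800 PST ContainersNotReady containers with unready status: [autoscaler]}'
# Capture everything between "unready status: [" and the closing "]".
echo "$line" | sed -n 's/.*unready status: \[\([^]]*\)\].*/\1/p'
```

On a live cluster the follow-up would typically be `kubectl -n kube-system describe pod <pod-name>` to see why that container never became ready.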

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-container_vm-1.5-upgrade-cluster-new/210/
Multiple broken tests:

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc42193c020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:351
Expected error:
    <*errors.errorString | 0xc42038cc30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36649

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc423498010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Jan 27 11:41:04.929: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38254

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc4224de010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:953

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc42038d4c0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:19, Replicas:9, UpdatedReplicas:4, AvailableReplicas:7, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621134915, nsec:0, loc:(*time.Location)(0x3cef280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621134915, nsec:0, loc:(*time.Location)(0x3cef280)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621134949, nsec:0, loc:(*time.Location)(0x3cef280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621134949, nsec:0, loc:(*time.Location)(0x3cef280)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-90035656\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:19, Replicas:9, UpdatedReplicas:4, AvailableReplicas:7, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621134915, nsec:0, loc:(*time.Location)(0x3cef280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621134915, nsec:0, loc:(*time.Location)(0x3cef280)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621134949, nsec:0, loc:(*time.Location)(0x3cef280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621134949, nsec:0, loc:(*time.Location)(0x3cef280)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-90035656\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Jan 27 10:11:17.412: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Jan 27 10:53:10.278: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38439

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc422234c60>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621150261, nsec:0, loc:(*time.Location)(0x3cef280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621150261, nsec:0, loc:(*time.Location)(0x3cef280)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621150369, nsec:0, loc:(*time.Location)(0x3cef280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621150369, nsec:0, loc:(*time.Location)(0x3cef280)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3837372172\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621150261, nsec:0, loc:(*time.Location)(0x3cef280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621150261, nsec:0, loc:(*time.Location)(0x3cef280)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621150369, nsec:0, loc:(*time.Location)(0x3cef280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621150369, nsec:0, loc:(*time.Location)(0x3cef280)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3837372172\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1335

Issues about this test specifically: #31697 #36574 #39785
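Both deployment-progress failures in this run end in the same terminal condition, `ProgressDeadlineExceeded`. A small sketch (assuming the `Reason:"…"` layout shown in the status dumps above; the sample string here is abridged) that lists the condition reasons from a pasted `DeploymentStatus` string:

```shell
#!/bin/sh
# List DeploymentCondition reasons from a status dump pasted from the log.
status='Conditions: Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability." Reason:"ProgressDeadlineExceeded", Message:"Replica set has timed out progressing."'
# Print each Reason:"..." token on its own line.
echo "$status" | grep -o 'Reason:"[^"]*"'
```

The same check against a live cluster would read the `Progressing` condition from `kubectl get deployment nginx -o yaml`.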

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc423490210>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 27 08:46:08.436: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Jan 27 16:52:40.092: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:261
Expected error:
    <*errors.errorString | 0xc42038cc30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:202

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc42193c020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Jan 27 12:48:57.830: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083
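Nearly all failures in this run collapse into a couple of shared signatures ("timed out waiting for the condition" while waiting for pods to run), which suggests one underlying cluster problem rather than many independent test bugs. A quick triage trick, sketched here on hard-coded messages taken from this run, is to bucket the failure lines by signature before filing per-test issues:

```shell
#!/bin/sh
# Count identical failure signatures. On real data the input would be the
# failure messages extracted from the run; they are hard-coded here.
printf '%s\n' \
  'Failed waiting for pods to enter running: timed out waiting for the condition' \
  'failed to wait for pods running: [timed out waiting for the condition]' \
  'Failed waiting for pods to enter running: timed out waiting for the condition' \
  'timed out waiting for the condition' \
| sort | uniq -c | sort -rn
```

The most frequent bucket is usually the one worth investigating first.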

@fejta fejta closed this as completed Jan 30, 2017