
kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new: broken test run #37749

Closed
k8s-github-robot opened this issue Dec 1, 2016 · 1 comment
Labels: area/test-infra, kind/flake, priority/backlog
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/422/

Multiple broken tests:

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.errorString | 0xc421ad89e0>: {
        s: "Only 1 pods started out of 5",
    }
    Only 1 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:359

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc420a720f0>: {
        s: "error running gcloud [container clusters --project=gke-up-g1-3-g1-5-up-clu-n --zone=us-central1-a upgrade jenkins-e2e --cluster-version=1.5.0-beta.2.2+f64c9f2d999ceb --quiet --image-type=gci]; got error exit status 1, stdout \"\", stderr \"Upgrading jenkins-e2e...\\n................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\\n name: u'operation-1480437667549-ea2706f0'\\n operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)\\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/61807208001/zones/us-central1-a/operations/operation-1480437667549-ea2706f0'\\n status: StatusValueValuesEnum(DONE, 3)\\n statusMessage: u'cloud-kubernetes::UNKNOWN: nodes \\\"gke-jenkins-e2e-default-pool-e8b18007-d3zq\\\" not found\\\\ngoroutine 421583 [running]:\\\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc42910d1c0, 0x3c, 0x1, 0x10)\\\\n\\\\tcloud/kubernetes/common/errors.go:627 +0x22f\\\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc423b60300, 0xc429b9eff0)\\\\n\\\\tcloud/kubernetes/common/errors.go:681 
+0x1ac\\\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42aa0b550, 0x1, 0x1, 0x0, 0x1)\\\\n\\\\tcloud/kubernetes/common/errors.go:852 +0x12b\\\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42c28b500, 0xc42eb196e0, 0xc430d0a240, 0x3, 0x4, 0x2, 0x4)\\\\n\\\\tcloud/kubernetes/common/call.go:130 +0x608\\\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc430df7300, 0x7f0923688610, 0xc42354aba0, 0xc4244c9040, 0xc4300e9500, 0xc4261a3f80, 0xc42eb196e0, 0xc422dd89c0, 0xc2, 0xc42d7a4ec0, ...)\\\\n\\\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42a020900, 0x7f0923688610, 0xc42354aba0, 0xc42eb196e0, 0x7f09234a0c78, 0xc4261a3f10, 0xc4244c9040, 0xc4300e9500, 0xc4261a3f80, 0xc422dd89c0, ...)\\\\n\\\\tcloud/kubernetes/server/deploy.go:1830 +0xdc\\\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42a020900, 0x7f0923688610, 0xc42354aba0, 0xc42eb196e0, 0x7f09234a0c78, 0xc4261a3f10, 0xc4297efa70, 0xc4244c9040, 0xc423b58680, 0xc42d7a4ec0, ...)\\\\n\\\\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, 0x3, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1179 +0x3e5\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, 0x0, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1057 +0x108\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc400000002, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:943 
+0x3d4\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0x2bc7fe0, 0xc430de81c0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0xc400000002, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1877 +0xca\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42eb19620, 0xc430de82a0, 0x2bc7fe0, 0xc430de81c0, 0xc42cb4d3e4, 0xc, 0xc400000002, 0xc42cb4c640, 0xc42013f650, 0x7f0923688610, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1869 +0x2fd\\\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\\\n\\\\tcloud/kubernetes/server/server.go:1871 +0xc44\\\\n'\\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/61807208001/zones/us-central1-a/clusters/jenkins-e2e/nodePools/default-pool'\\n zone: u'us-central1-a'>] finished with error: cloud-kubernetes::UNKNOWN: nodes \\\"gke-jenkins-e2e-default-pool-e8b18007-d3zq\\\" not found\\ngoroutine 421583 [running]:\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc42910d1c0, 0x3c, 0x1, 0x10)\\n\\tcloud/kubernetes/common/errors.go:627 +0x22f\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc423b60300, 0xc429b9eff0)\\n\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42aa0b550, 0x1, 0x1, 0x0, 0x1)\\n\\tcloud/kubernetes/common/errors.go:852 +0x12b\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42c28b500, 0xc42eb196e0, 0xc430d0a240, 0x3, 0x4, 0x2, 0x4)\\n\\tcloud/kubernetes/common/call.go:130 +0x608\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc430df7300, 0x7f0923688610, 0xc42354aba0, 0xc4244c9040, 0xc4300e9500, 0xc4261a3f80, 0xc42eb196e0, 0xc422dd89c0, 0xc2, 0xc42d7a4ec0, ...)\\n\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42a020900, 0x7f0923688610, 0xc42354aba0, 0xc42eb196e0, 
0x7f09234a0c78, 0xc4261a3f10, 0xc4244c9040, 0xc4300e9500, 0xc4261a3f80, 0xc422dd89c0, ...)\\n\\tcloud/kubernetes/server/deploy.go:1830 +0xdc\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42a020900, 0x7f0923688610, 0xc42354aba0, 0xc42eb196e0, 0x7f09234a0c78, 0xc4261a3f10, 0xc4297efa70, 0xc4244c9040, 0xc423b58680, 0xc42d7a4ec0, ...)\\n\\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, 0x3, ...)\\n\\tcloud/kubernetes/server/server.go:1179 +0x3e5\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, 0x0, ...)\\n\\tcloud/kubernetes/server/server.go:1057 +0x108\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc400000002, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, ...)\\n\\tcloud/kubernetes/server/server.go:943 +0x3d4\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0x2bc7fe0, 0xc430de81c0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0xc400000002, ...)\\n\\tcloud/kubernetes/server/server.go:1877 +0xca\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42eb19620, 0xc430de82a0, 0x2bc7fe0, 0xc430de81c0, 0xc42cb4d3e4, 0xc, 0xc400000002, 0xc42cb4c640, 0xc42013f650, 0x7f0923688610, ...)\\n\\tcloud/kubernetes/server/server.go:1869 +0x2fd\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\n\\tcloud/kubernetes/server/server.go:1871 +0xc44\\n\\n\"",
    }
    error running gcloud [container clusters --project=gke-up-g1-3-g1-5-up-clu-n --zone=us-central1-a upgrade jenkins-e2e --cluster-version=1.5.0-beta.2.2+f64c9f2d999ceb --quiet --image-type=gci]; got error exit status 1, stdout "", stderr "Upgrading jenkins-e2e...\n................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1480437667549-ea2706f0'\n operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/61807208001/zones/us-central1-a/operations/operation-1480437667549-ea2706f0'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'cloud-kubernetes::UNKNOWN: nodes \"gke-jenkins-e2e-default-pool-e8b18007-d3zq\" not found\\ngoroutine 421583 [running]:\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc42910d1c0, 0x3c, 0x1, 0x10)\\n\\tcloud/kubernetes/common/errors.go:627 +0x22f\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc423b60300, 0xc429b9eff0)\\n\\tcloud/kubernetes/common/errors.go:681 
+0x1ac\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42aa0b550, 0x1, 0x1, 0x0, 0x1)\\n\\tcloud/kubernetes/common/errors.go:852 +0x12b\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42c28b500, 0xc42eb196e0, 0xc430d0a240, 0x3, 0x4, 0x2, 0x4)\\n\\tcloud/kubernetes/common/call.go:130 +0x608\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc430df7300, 0x7f0923688610, 0xc42354aba0, 0xc4244c9040, 0xc4300e9500, 0xc4261a3f80, 0xc42eb196e0, 0xc422dd89c0, 0xc2, 0xc42d7a4ec0, ...)\\n\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42a020900, 0x7f0923688610, 0xc42354aba0, 0xc42eb196e0, 0x7f09234a0c78, 0xc4261a3f10, 0xc4244c9040, 0xc4300e9500, 0xc4261a3f80, 0xc422dd89c0, ...)\\n\\tcloud/kubernetes/server/deploy.go:1830 +0xdc\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42a020900, 0x7f0923688610, 0xc42354aba0, 0xc42eb196e0, 0x7f09234a0c78, 0xc4261a3f10, 0xc4297efa70, 0xc4244c9040, 0xc423b58680, 0xc42d7a4ec0, ...)\\n\\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, 0x3, ...)\\n\\tcloud/kubernetes/server/server.go:1179 +0x3e5\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, 0x0, ...)\\n\\tcloud/kubernetes/server/server.go:1057 +0x108\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc400000002, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, ...)\\n\\tcloud/kubernetes/server/server.go:943 +0x3d4\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42013f650, 
0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0x2bc7fe0, 0xc430de81c0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0xc400000002, ...)\\n\\tcloud/kubernetes/server/server.go:1877 +0xca\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42eb19620, 0xc430de82a0, 0x2bc7fe0, 0xc430de81c0, 0xc42cb4d3e4, 0xc, 0xc400000002, 0xc42cb4c640, 0xc42013f650, 0x7f0923688610, ...)\\n\\tcloud/kubernetes/server/server.go:1869 +0x2fd\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\n\\tcloud/kubernetes/server/server.go:1871 +0xc44\\n'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/61807208001/zones/us-central1-a/clusters/jenkins-e2e/nodePools/default-pool'\n zone: u'us-central1-a'>] finished with error: cloud-kubernetes::UNKNOWN: nodes \"gke-jenkins-e2e-default-pool-e8b18007-d3zq\" not found\ngoroutine 421583 [running]:\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc42910d1c0, 0x3c, 0x1, 0x10)\n\tcloud/kubernetes/common/errors.go:627 +0x22f\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc423b60300, 0xc429b9eff0)\n\tcloud/kubernetes/common/errors.go:681 +0x1ac\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42aa0b550, 0x1, 0x1, 0x0, 0x1)\n\tcloud/kubernetes/common/errors.go:852 +0x12b\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42c28b500, 0xc42eb196e0, 0xc430d0a240, 0x3, 0x4, 0x2, 0x4)\n\tcloud/kubernetes/common/call.go:130 +0x608\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc430df7300, 0x7f0923688610, 0xc42354aba0, 0xc4244c9040, 0xc4300e9500, 0xc4261a3f80, 0xc42eb196e0, 0xc422dd89c0, 0xc2, 0xc42d7a4ec0, ...)\n\tcloud/kubernetes/server/updater/updater.go:70 +0x693\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42a020900, 0x7f0923688610, 0xc42354aba0, 0xc42eb196e0, 0x7f09234a0c78, 0xc4261a3f10, 0xc4244c9040, 0xc4300e9500, 0xc4261a3f80, 0xc422dd89c0, ...)\n\tcloud/kubernetes/server/deploy.go:1830 
+0xdc\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42a020900, 0x7f0923688610, 0xc42354aba0, 0xc42eb196e0, 0x7f09234a0c78, 0xc4261a3f10, 0xc4297efa70, 0xc4244c9040, 0xc423b58680, 0xc42d7a4ec0, ...)\n\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, 0x3, ...)\n\tcloud/kubernetes/server/server.go:1179 +0x3e5\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, 0x0, ...)\n\tcloud/kubernetes/server/server.go:1057 +0x108\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0xc400000002, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0x2bc7fe0, 0xc430de81c0, ...)\n\tcloud/kubernetes/server/server.go:943 +0x3d4\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42013f650, 0x7f0923688610, 0xc42d30ddd0, 0xc42eb196e0, 0x2bc7fe0, 0xc430de81c0, 0xc4244c9040, 0xc423b58680, 0xc425fd7320, 0xc400000002, ...)\n\tcloud/kubernetes/server/server.go:1877 +0xca\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42eb19620, 0xc430de82a0, 0x2bc7fe0, 0xc430de81c0, 0xc42cb4d3e4, 0xc, 0xc400000002, 0xc42cb4c640, 0xc42013f650, 0x7f0923688610, ...)\n\tcloud/kubernetes/server/server.go:1869 +0x2fd\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\n\tcloud/kubernetes/server/server.go:1871 +0xc44\n\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:93

Failed: [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:232
Expected
    <*api.Event | 0x0>: nil
not to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:230

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #32684 #36278

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203ac810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #33285

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:181
Expected error:
    <*errors.errorString | 0xc421909fd0>: {
        s: "expected \"[1-9]\" in container output: Expected\n    <string>: content of file \"/etc/cpu_limit\": 0\n    \nto match regular expression\n    <string>: [1-9]",
    }
    expected "[1-9]" in container output: Expected
        <string>: content of file "/etc/cpu_limit": 0
        
    to match regular expression
        <string>: [1-9]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #36178

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #34104

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:403
Expected error:
    <*errors.errorString | 0xc420309b80>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 1 (20.006912668s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 1 (20.006912668s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:398

Issues about this test specifically: #37373

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc4227eca40>: {
        s: "Only 1 pods started out of 5",
    }
    Only 1 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28297 #37101

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:240
Expected success, but got an error:
    <*errors.errorString | 0xc4203ac810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:232

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:158

Issues about this test specifically: #31873

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203ac810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234
Nov 29 10:52:50.222: pod e2e-tests-container-probe-o0og0/liveness-http - expected number of restarts: 0, found restarts: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:403

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc422655bc0>: {
        s: "expected \"mode of file \\\"/etc/podname\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/podname\": -rw-r--r--\n    \nto contain substring\n    <string>: mode of file \"/etc/podname\": -r--------",
    }
    expected "mode of file \"/etc/podname\": -r--------" in container output: Expected
        <string>: mode of file "/etc/podname": -rw-r--r--
        
    to contain substring
        <string>: mode of file "/etc/podname": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37423

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:42
Expected error:
    <*errors.errorString | 0xc4205ce330>: {
        s: "expected \"mode of file \\\"/etc/configmap-volume/data-1\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/configmap-volume/data-1\": -rw-r--r--\n    content of file \"/etc/configmap-volume/data-1\": value-1\n    \nto contain substring\n    <string>: mode of file \"/etc/configmap-volume/data-1\": -r--------",
    }
    expected "mode of file \"/etc/configmap-volume/data-1\": -r--------" in container output: Expected
        <string>: mode of file "/etc/configmap-volume/data-1": -rw-r--r--
        content of file "/etc/configmap-volume/data-1": value-1
        
    to contain substring
        <string>: mode of file "/etc/configmap-volume/data-1": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34827

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <*errors.errorString | 0xc4217b2490>: {
        s: "Only 99 pods started out of 100",
    }
    Only 99 pods started out of 100
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:78

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #33887

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.errorString | 0xc4227f6ea0>: {
        s: "Only 1 pods started out of 5",
    }
    Only 1 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:352

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc4218a2030>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Sysctls should support unsafe sysctls which are actually whitelisted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:163
Expected
    <string>: kernel.shm_rmid_forced = 0
    
to contain substring
    <string>: kernel.shm_rmid_forced = 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:162

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:808
Nov 29 10:22:04.653: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2495

Issues about this test specifically: #26134

Failed: [k8s.io] Sysctls should support sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:122
Expected
    <string>: kernel.shm_rmid_forced = 0
    
to contain substring
    <string>: kernel.shm_rmid_forced = 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:121

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #34064

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc42150dd60>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 1 (20.005770792s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 1 (20.005770792s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:80

Issues about this test specifically: #26744 #26929

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #36271

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:58
Expected error:
    <*errors.errorString | 0xc422390590>: {
        s: "expected \"mode of file \\\"/etc/podname\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/podname\": -rw-r--r--\n    \nto contain substring\n    <string>: mode of file \"/etc/podname\": -r--------",
    }
    expected "mode of file \"/etc/podname\": -r--------" in container output: Expected
        <string>: mode of file "/etc/podname": -rw-r--r--
        
    to contain substring
        <string>: mode of file "/etc/podname": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36300
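
The DefaultMode failure above reports a file mounted with `0644` permissions where the test expected `0400`. As a side note on what those Gomega strings encode: the `ls -l`-style mode string in the output can be reproduced with Python's `stat` module (a minimal illustrative sketch, not part of the e2e suite):

```python
import stat

def mode_string(perm_bits: int) -> str:
    """Render a regular file's permission bits the way `ls -l` (and the
    e2e test output above) does, e.g. 0o400 -> '-r--------'."""
    return stat.filemode(stat.S_IFREG | perm_bits)

print(mode_string(0o400))  # '-r--------'  (the mode the test expected)
print(mode_string(0o644))  # '-rw-r--r--'  (the mode the container observed)
```

In other words, the failure says the DownwardAPI volume plugin applied the default `0644` instead of the requested `DefaultMode: 0400`.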

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:69
Expected error:
    <*errors.errorString | 0xc421aea180>: {
        s: "Error while waiting for Deployment kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for Deployment kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:188
Expected error:
    <*errors.errorString | 0xc420268320>: {
        s: "expected \"[1-9]\" in container output: Expected\n    <string>: content of file \"/etc/memory_limit\": 0\n    \nto match regular expression\n    <string>: [1-9]",
    }
    expected "[1-9]" in container output: Expected
        <string>: content of file "/etc/memory_limit": 0
        
    to match regular expression
        <string>: [1-9]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37531

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc42165cdc0>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:308

Issues about this test specifically: #27233 #36204
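
The "timeout waiting 10m0s for cluster size to be 2" message comes from a polling wait loop in the e2e framework. A rough Python sketch of that pattern (hypothetical stand-in; `get_ready_nodes` represents listing Ready nodes via the Kubernetes API, and names here are illustrative, not the framework's actual identifiers):

```python
import time

def wait_for_cluster_size(get_ready_nodes, size, timeout, interval=0.01):
    """Poll until the cluster reports `size` ready nodes, or time out.

    get_ready_nodes: zero-arg callable returning the current Ready-node count.
    Raises TimeoutError with the last observed count, mirroring the
    "timeout waiting ... for cluster size to be N" failure above.
    """
    deadline = time.monotonic() + timeout
    last = None
    while time.monotonic() < deadline:
        last = get_ready_nodes()
        if last == size:
            return True
        time.sleep(interval)
    raise TimeoutError(
        f"timeout waiting {timeout}s for cluster size to be {size}; last seen {last}"
    )
```

A cluster stuck at 1 node for the full window, as in the runs above, exhausts the deadline and surfaces exactly this kind of error.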

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #34317

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203ac810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:393
Expected error:
    <*errors.errorString | 0xc4203ac810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36970

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:375
Expected
    <*errors.errorString | 0xc4203ac810>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:363

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] StatefulSet [Slow] [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:276
Nov 29 13:17:59.387: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:929

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Expected error:
    <*errors.errorString | 0xc4203ac810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:282

Issues about this test specifically: #37259

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:493
Expected
    <int>: 1
to equal
    <int>: 42
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:463

Issues about this test specifically: #31151 #35586
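
This failure means the exec round trip returned exit code 1 instead of propagating the container command's exit status 42. The contract the test relies on can be sketched locally with a plain subprocess (illustrative only, assuming a POSIX `sh`; no kubectl involved):

```python
import subprocess

# The e2e test runs roughly `kubectl exec <pod> -- sh -c 'exit 42'` and
# asserts the remote command's exit status survives the round trip.
# Locally, the same propagation contract looks like this:
result = subprocess.run(["sh", "-c", "exit 42"])
print(result.returncode)  # 42
```

Getting 1 back, as in the run above, indicates the exec path reported its own failure rather than the command's status.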

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:59
Expected error:
    <*errors.errorString | 0xc422709c90>: {
        s: "expected \"mode of file \\\"/etc/configmap-volume/path/to/data-2\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/configmap-volume/path/to/data-2\": -rw-r--r--\n    content of file \"/etc/configmap-volume/path/to/data-2\": value-2\n    \nto contain substring\n    <string>: mode of file \"/etc/configmap-volume/path/to/data-2\": -r--------",
    }
    expected "mode of file \"/etc/configmap-volume/path/to/data-2\": -r--------" in container output: Expected
        <string>: mode of file "/etc/configmap-volume/path/to/data-2": -rw-r--r--
        content of file "/etc/configmap-volume/path/to/data-2": value-2
        
    to contain substring
        <string>: mode of file "/etc/configmap-volume/path/to/data-2": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35790

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28283

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28010 #28427 #33997

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203ac810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:49
Expected error:
    <*errors.errorString | 0xc420332e70>: {
        s: "expected \"mode of file \\\"/etc/secret-volume/new-path-data-1\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/secret-volume/new-path-data-1\": -rw-r--r--\n    content of file \"/etc/secret-volume/new-path-data-1\": value-1\n    \n    \nto contain substring\n    <string>: mode of file \"/etc/secret-volume/new-path-data-1\": -r--------",
    }
    expected "mode of file \"/etc/secret-volume/new-path-data-1\": -r--------" in container output: Expected
        <string>: mode of file "/etc/secret-volume/new-path-data-1": -rw-r--r--
        content of file "/etc/secret-volume/new-path-data-1": value-1
        
        
    to contain substring
        <string>: mode of file "/etc/secret-volume/new-path-data-1": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37529

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420734050>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

@k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 1, 2016

@k8s-github-robot (Author) commented:
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/421/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Nov 29 03:47:53.606: Couldn't restore the original cluster size: timeout waiting 10m0s for cluster size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:282

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42039edc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421d28e30>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.009168753s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.009168753s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:80

Issues about this test specifically: #26744 #26929

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:403
Expected error:
    <*errors.errorString | 0xc42356baf0>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.006718986s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.006718986s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:398

Issues about this test specifically: #37373

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc420b06e20>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421556030>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620
