kubernetes-e2e-gke-container_vm-1.4-gci-1.5-upgrade-cluster-new: broken test run #37746

Closed
k8s-github-robot opened this issue Dec 1, 2016 · 4 comments
Labels: area/test-infra, kind/flake, priority/backlog

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-gci-1.5-upgrade-cluster-new/155/

Multiple broken tests:

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28283
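
For readers unfamiliar with this failure shape: the "Expected <int>: 1 to be >= <int>: 2" block is standard Gomega matcher output from a node-count precondition. Below is a minimal, hypothetical sketch (not the actual test/e2e/pd.go code) of the kind of assertion that produces this output when the cluster reports only one schedulable node:

```go
// Hypothetical sketch, not the real e2e test: shows how a Gomega
// BeNumerically(">=", 2) precondition renders the failure seen above.
package example

import (
	"testing"

	"github.com/onsi/gomega"
)

func TestNodeCountPrecondition(t *testing.T) {
	g := gomega.NewWithT(t)

	// Assumed value: what a cluster left with a single node would report.
	schedulableNodes := 1

	// On failure, Gomega prepends the description and prints:
	//   Requires at least 2 nodes
	//   Expected
	//       <int>: 1
	//   to be >=
	//       <int>: 2
	g.Expect(schedulableNodes).To(gomega.BeNumerically(">=", 2), "Requires at least 2 nodes")
}
```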

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Expected error:
    <*errors.errorString | 0xc4203c1780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:282

Issues about this test specifically: #37259

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.errorString | 0xc421310e50>: {
        s: "Only 1 pods started out of 5",
    }
    Only 1 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:359

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc4209a0110>: {
        s: "error running gcloud [container clusters --project=gke-up-c1-4-g1-5-up-clu-n --zone=us-central1-a upgrade jenkins-e2e --cluster-version=1.5.0-beta.2.2+f64c9f2d999ceb --quiet --image-type=gci]; got error exit status 1, stdout \"\", stderr \"Upgrading jenkins-e2e...\\n..................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\\n name: u'operation-1480445427283-93d52b1b'\\n operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)\\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/769714595857/zones/us-central1-a/operations/operation-1480445427283-93d52b1b'\\n status: StatusValueValuesEnum(DONE, 3)\\n statusMessage: u'cloud-kubernetes::UNKNOWN: nodes \\\"gke-jenkins-e2e-default-pool-bade9222-c9mc\\\" not found\\\\ngoroutine 88725 [running]:\\\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc4312a2780, 0x3c, 0x1, 0x10)\\\\n\\\\tcloud/kubernetes/common/errors.go:627 +0x22f\\\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc432046d00, 0xc4325be870)\\\\n\\\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42bcfa200, 0x1, 0x1, 0x0, 0x1)\\\\n\\\\tcloud/kubernetes/common/errors.go:852 +0x12b\\\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42e34e100, 0xc4209d2300, 0xc4296fd560, 0x3, 0x4, 0x2, 0x4)\\\\n\\\\tcloud/kubernetes/common/call.go:130 +0x608\\\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42da43b00, 0x7f6632debf80, 0xc424db7f80, 0xc4229cd860, 0xc42eb0d800, 0xc431d13ab0, 0xc4209d2300, 0xc424676270, 0xc2, 0xc42f1ed940, ...)\\\\n\\\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42c70bc80, 0x7f6632debf80, 0xc424db7f80, 0xc4209d2300, 0x7f6632c2fed8, 0xc431d13a40, 0xc4229cd860, 0xc42eb0d800, 0xc431d13ab0, 0xc424676270, ...)\\\\n\\\\tcloud/kubernetes/server/deploy.go:1830 +0xdc\\\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42c70bc80, 0x7f6632debf80, 0xc424db7f80, 0xc4209d2300, 0x7f6632c2fed8, 0xc431d13a40, 0xc42a54f950, 0xc4229cd860, 0xc422dbea80, 0xc42f1ed940, ...)\\\\n\\\\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 0xc42da42880, 0x3, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1179 +0x3e5\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 
0xc4209d2300, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 0xc42da42880, 0x0, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1057 +0x108\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc400000002, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 0xc42da42880, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:943 +0x3d4\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0x2bc7fe0, 0xc42da42880, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0xc400000002, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1877 +0xca\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc43180ff80, 0xc42da42c60, 0x2bc7fe0, 0xc42da42880, 0xc42beabdf4, 0xc, 0xc400000002, 0xc42beaa710, 0xc42c7c8bd0, 0x7f6632debf80, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1869 +0x2fd\\\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\\\n\\\\tcloud/kubernetes/server/server.go:1871 +0xc44\\\\n'\\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/769714595857/zones/us-central1-a/clusters/jenkins-e2e/nodePools/default-pool'\\n zone: u'us-central1-a'>] finished with error: cloud-kubernetes::UNKNOWN: nodes \\\"gke-jenkins-e2e-default-pool-bade9222-c9mc\\\" not found\\ngoroutine 88725 [running]:\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc4312a2780, 0x3c, 0x1, 0x10)\\n\\tcloud/kubernetes/common/errors.go:627 +0x22f\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc432046d00, 0xc4325be870)\\n\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42bcfa200, 0x1, 0x1, 0x0, 0x1)\\n\\tcloud/kubernetes/common/errors.go:852 +0x12b\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42e34e100, 0xc4209d2300, 0xc4296fd560, 0x3, 0x4, 0x2, 0x4)\\n\\tcloud/kubernetes/common/call.go:130 +0x608\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42da43b00, 0x7f6632debf80, 0xc424db7f80, 0xc4229cd860, 0xc42eb0d800, 0xc431d13ab0, 0xc4209d2300, 0xc424676270, 0xc2, 0xc42f1ed940, ...)\\n\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42c70bc80, 0x7f6632debf80, 0xc424db7f80, 0xc4209d2300, 0x7f6632c2fed8, 0xc431d13a40, 0xc4229cd860, 0xc42eb0d800, 0xc431d13ab0, 0xc424676270, ...)\\n\\tcloud/kubernetes/server/deploy.go:1830 +0xdc\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42c70bc80, 0x7f6632debf80, 0xc424db7f80, 0xc4209d2300, 0x7f6632c2fed8, 0xc431d13a40, 0xc42a54f950, 0xc4229cd860, 0xc422dbea80, 0xc42f1ed940, ...)\\n\\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 0xc42da42880, 0x3, ...)\\n\\tcloud/kubernetes/server/server.go:1179 +0x3e5\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 0xc42da42880, 0x0, ...)\\n\\tcloud/kubernetes/server/server.go:1057 +0x108\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc400000002, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 
0xc42da42880, ...)\\n\\tcloud/kubernetes/server/server.go:943 +0x3d4\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0x2bc7fe0, 0xc42da42880, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0xc400000002, ...)\\n\\tcloud/kubernetes/server/server.go:1877 +0xca\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc43180ff80, 0xc42da42c60, 0x2bc7fe0, 0xc42da42880, 0xc42beabdf4, 0xc, 0xc400000002, 0xc42beaa710, 0xc42c7c8bd0, 0x7f6632debf80, ...)\\n\\tcloud/kubernetes/server/server.go:1869 +0x2fd\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\n\\tcloud/kubernetes/server/server.go:1871 +0xc44\\n\\n\"",
    }
    error running gcloud [container clusters --project=gke-up-c1-4-g1-5-up-clu-n --zone=us-central1-a upgrade jenkins-e2e --cluster-version=1.5.0-beta.2.2+f64c9f2d999ceb --quiet --image-type=gci]; got error exit status 1, stdout "", stderr "Upgrading jenkins-e2e...\n..................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1480445427283-93d52b1b'\n operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/769714595857/zones/us-central1-a/operations/operation-1480445427283-93d52b1b'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'cloud-kubernetes::UNKNOWN: nodes \"gke-jenkins-e2e-default-pool-bade9222-c9mc\" not found\\ngoroutine 88725 [running]:\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc4312a2780, 0x3c, 0x1, 0x10)\\n\\tcloud/kubernetes/common/errors.go:627 +0x22f\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc432046d00, 0xc4325be870)\\n\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42bcfa200, 0x1, 0x1, 0x0, 0x1)\\n\\tcloud/kubernetes/common/errors.go:852 +0x12b\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42e34e100, 0xc4209d2300, 0xc4296fd560, 0x3, 0x4, 0x2, 0x4)\\n\\tcloud/kubernetes/common/call.go:130 +0x608\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42da43b00, 0x7f6632debf80, 0xc424db7f80, 0xc4229cd860, 0xc42eb0d800, 0xc431d13ab0, 0xc4209d2300, 0xc424676270, 0xc2, 0xc42f1ed940, ...)\\n\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42c70bc80, 0x7f6632debf80, 0xc424db7f80, 0xc4209d2300, 0x7f6632c2fed8, 0xc431d13a40, 0xc4229cd860, 0xc42eb0d800, 0xc431d13ab0, 0xc424676270, ...)\\n\\tcloud/kubernetes/server/deploy.go:1830 +0xdc\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42c70bc80, 0x7f6632debf80, 0xc424db7f80, 0xc4209d2300, 0x7f6632c2fed8, 0xc431d13a40, 0xc42a54f950, 0xc4229cd860, 0xc422dbea80, 0xc42f1ed940, ...)\\n\\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 0xc42da42880, 0x3, ...)\\n\\tcloud/kubernetes/server/server.go:1179 +0x3e5\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 
0xc42da42880, 0x0, ...)\\n\\tcloud/kubernetes/server/server.go:1057 +0x108\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc400000002, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 0xc42da42880, ...)\\n\\tcloud/kubernetes/server/server.go:943 +0x3d4\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0x2bc7fe0, 0xc42da42880, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0xc400000002, ...)\\n\\tcloud/kubernetes/server/server.go:1877 +0xca\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc43180ff80, 0xc42da42c60, 0x2bc7fe0, 0xc42da42880, 0xc42beabdf4, 0xc, 0xc400000002, 0xc42beaa710, 0xc42c7c8bd0, 0x7f6632debf80, ...)\\n\\tcloud/kubernetes/server/server.go:1869 +0x2fd\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\n\\tcloud/kubernetes/server/server.go:1871 +0xc44\\n'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/769714595857/zones/us-central1-a/clusters/jenkins-e2e/nodePools/default-pool'\n zone: u'us-central1-a'>] finished with error: cloud-kubernetes::UNKNOWN: nodes \"gke-jenkins-e2e-default-pool-bade9222-c9mc\" not found\ngoroutine 88725 [running]:\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc4312a2780, 0x3c, 0x1, 0x10)\n\tcloud/kubernetes/common/errors.go:627 +0x22f\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc432046d00, 0xc4325be870)\n\tcloud/kubernetes/common/errors.go:681 +0x1ac\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42bcfa200, 0x1, 0x1, 0x0, 0x1)\n\tcloud/kubernetes/common/errors.go:852 +0x12b\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42e34e100, 0xc4209d2300, 0xc4296fd560, 0x3, 0x4, 0x2, 0x4)\n\tcloud/kubernetes/common/call.go:130 +0x608\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42da43b00, 0x7f6632debf80, 0xc424db7f80, 0xc4229cd860, 0xc42eb0d800, 0xc431d13ab0, 0xc4209d2300, 0xc424676270, 0xc2, 0xc42f1ed940, ...)\n\tcloud/kubernetes/server/updater/updater.go:70 +0x693\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42c70bc80, 0x7f6632debf80, 0xc424db7f80, 0xc4209d2300, 0x7f6632c2fed8, 0xc431d13a40, 0xc4229cd860, 0xc42eb0d800, 0xc431d13ab0, 0xc424676270, ...)\n\tcloud/kubernetes/server/deploy.go:1830 +0xdc\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42c70bc80, 0x7f6632debf80, 0xc424db7f80, 0xc4209d2300, 0x7f6632c2fed8, 0xc431d13a40, 0xc42a54f950, 0xc4229cd860, 0xc422dbea80, 0xc42f1ed940, ...)\n\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 0xc42da42880, 0x3, ...)\n\tcloud/kubernetes/server/server.go:1179 +0x3e5\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 0xc42da42880, 0x0, ...)\n\tcloud/kubernetes/server/server.go:1057 +0x108\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0xc400000002, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0x2bc7fe0, 0xc42da42880, ...)\n\tcloud/kubernetes/server/server.go:943 
+0x3d4\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42c7c8bd0, 0x7f6632debf80, 0xc424db7950, 0xc4209d2300, 0x2bc7fe0, 0xc42da42880, 0xc4229cd860, 0xc422dbea80, 0xc420a5ae40, 0xc400000002, ...)\n\tcloud/kubernetes/server/server.go:1877 +0xca\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc43180ff80, 0xc42da42c60, 0x2bc7fe0, 0xc42da42880, 0xc42beabdf4, 0xc, 0xc400000002, 0xc42beaa710, 0xc42c7c8bd0, 0x7f6632debf80, ...)\n\tcloud/kubernetes/server/server.go:1869 +0x2fd\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\n\tcloud/kubernetes/server/server.go:1871 +0xc44\n\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:93

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #36178

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #33285

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #34064

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc4228dc7b0>: {
        s: "Only 1 pods started out of 5",
    }
    Only 1 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #33887

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #34317

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #34250

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc4220c94e0>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #32684 #36278

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28010 #28427 #33997

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:403
Expected error:
    <*errors.errorString | 0xc421310e70>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 1 (20.008500906s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 1 (20.008500906s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:398

Issues about this test specifically: #37373

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28297 #37101

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Nov 29 14:56:04.083: Memory usage exceeding limits:
 node gke-jenkins-e2e-default-pool-bade9222-im6b:
 container "runtime": expected RSS memory (MB) < 314572800; got 315088896
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:154

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc422588390>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 1 (20.060403354s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 1 (20.060403354s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:80

Issues about this test specifically: #26744 #26929

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.errorString | 0xc4222e9250>: {
        s: "Only 1 pods started out of 5",
    }
    Only 1 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:352

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc421470030>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:308

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
At least two nodes necessary with an external or LegacyHostIP
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:476

Issues about this test specifically: #36271

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:564
Expected error:
    <*errors.errorString | 0xc42329a960>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:489

Issues about this test specifically: #30078 #30142

@k8s-github-robot added the area/test-infra, kind/flake, and priority/backlog labels on Dec 1, 2016
@k8s-github-robot (Author)

@k8s-github-robot (Author)

@k8s-github-robot (Author)
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-gci-1.5-upgrade-cluster-new/159/

Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc42377a550>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 197, 194, 195],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.197.194.195:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42172d710>: {
        s: "Namespace e2e-tests-services-j9t9p is active",
    }
    Namespace e2e-tests-services-j9t9p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] StatefulSet [Slow] [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:112
Expected error:
    <*errors.errorString | 0xc4233a0780>: {
        s: "Timeout waiting for pv provisioner to delete pvs, this might mean the test leaked pvs.",
    }
    Timeout waiting for pv provisioner to delete pvs, this might mean the test leaked pvs.
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1072

Issues about this test specifically: #37517

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423241b60>: {
        s: "Namespace e2e-tests-services-j9t9p is active",
    }
    Namespace e2e-tests-services-j9t9p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4223b2b30>: {
        s: "Namespace e2e-tests-services-j9t9p is active",
    }
    Namespace e2e-tests-services-j9t9p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

@k8s-github-robot (Author)
