
kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new: broken test run #37733

Closed
k8s-github-robot opened this issue Dec 1, 2016 · 2 comments
Labels: area/test-infra, kind/flake, priority/backlog


https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/417/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a0f230>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187
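
Nearly every SchedulerPredicates failure in this run reports the same error: the namespace `e2e-tests-cluster-upgrade-u7pm6` left over from the upgrade test was still active, so the serial scheduler tests could not start cleanly. A quick way to confirm that one root cause explains many failures is to count the quoted Go error strings in the log dump; the helper below is purely illustrative and not part of the e2e framework:

```python
import re
from collections import Counter

def group_errors(log_text):
    """Count each quoted Go error string (the s: "..." lines) in a test-log dump."""
    pattern = r's: "((?:[^"\\]|\\.)*)"'  # tolerate escaped quotes inside the string
    return Counter(re.findall(pattern, log_text))

sample = '''
    <*errors.errorString | 0xc421a0f230>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    <*errors.errorString | 0xc420e5bda0>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
'''
counts = group_errors(sample)
print(counts.most_common(1))  # the dominant error and how often it appears
```

Run against the full Gubernator log, the dominant entry makes it obvious that the scheduler-predicate failures are collateral damage from the stuck namespace rather than independent flakes.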

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e5bda0>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422b953d0>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423224150>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:40
Expected error:
    <*errors.errorString | 0xc420f988a0>: {
        s: "expected \"mode of file \\\"/etc/secret-volume/data-1\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/secret-volume/data-1\": -rw-r--r--\n    content of file \"/etc/secret-volume/data-1\": value-1\n    \n    \nto contain substring\n    <string>: mode of file \"/etc/secret-volume/data-1\": -r--------",
    }
    expected "mode of file \"/etc/secret-volume/data-1\": -r--------" in container output: Expected
        <string>: mode of file "/etc/secret-volume/data-1": -rw-r--r--
        content of file "/etc/secret-volume/data-1": value-1
        
        
    to contain substring
        <string>: mode of file "/etc/secret-volume/data-1": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35256
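
The secret-volume and downward-API mode failures in this run all show the same mismatch: the test requests mode 0400 (`-r--------`) via `defaultMode`/item `mode`, but observes the kubelet default 0644 (`-rw-r--r--`). As a sanity aid for reading these logs (an illustrative helper, not e2e framework code), an ls-style permission string converts to octal like this:

```python
def symbolic_to_octal(mode):
    """Convert an ls-style permission string such as '-rw-r--r--' to its octal value."""
    val = 0
    for ch in mode[1:]:  # skip the leading file-type character
        val = (val << 1) | (0 if ch == "-" else 1)
    return val

# Observed by the test (kubelet default) vs. requested via defaultMode:
print(oct(symbolic_to_octal("-rw-r--r--")))  # 0o644
print(oct(symbolic_to_octal("-r--------")))  # 0o400
```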

Failed: DiffResources {e2e.go}

Error: 5 leaked resources
+k8s-fw-ab9674b7ab69f11e6a67242010af0002  jenkins-e2e  0.0.0.0/0     tcp:80                                  gke-jenkins-e2e-673e0a39-node
+NAME                              REGION       IP_ADDRESS       IP_PROTOCOL  TARGET
+ab9674b7ab69f11e6a67242010af0002  us-central1  104.198.150.208  TCP          us-central1/targetPools/ab9674b7ab69f11e6a67242010af0002
+NAME                              REGION       SESSION_AFFINITY  BACKUP  HEALTH_CHECKS
+ab9674b7ab69f11e6a67242010af0002  us-central1

Issues about this test specifically: #33373 #33416 #34060
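
`DiffResources` diffs the project's GCE inventory before and after the run; the `+` lines above are the leaked firewall rule, forwarding rule, and target pool (with their header rows). A hypothetical helper for pulling resource names out of such a diff, e.g. to feed a cleanup script:

```python
def leaked_names(diff_lines):
    """Pull resource names from '+'-prefixed leaked-resource lines, skipping header rows."""
    return [
        line[1:].split()[0]
        for line in diff_lines
        if line.startswith("+") and not line.startswith("+NAME")
    ]

diff = [
    "+k8s-fw-ab9674b7ab69f11e6a67242010af0002  jenkins-e2e  0.0.0.0/0  tcp:80  gke-jenkins-e2e-673e0a39-node",
    "+NAME                              REGION       IP_ADDRESS       IP_PROTOCOL  TARGET",
    "+ab9674b7ab69f11e6a67242010af0002  us-central1  104.198.150.208  TCP          us-central1/targetPools/ab9674b7ab69f11e6a67242010af0002",
    "+NAME                              REGION       SESSION_AFFINITY  BACKUP  HEALTH_CHECKS",
    "+ab9674b7ab69f11e6a67242010af0002  us-central1",
]
print(leaked_names(diff))
```

The firewall rule and the two load-balancer resources share the `ab9674b7…` suffix, which is how they can be traced back to the same leaked Kubernetes Service.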

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a38880>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422845b00>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a51950>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4232245c0>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422be5fd0>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420971790>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420350c70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:158

Issues about this test specifically: #31873

Failed: [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:232
Expected
    <*api.Event | 0x0>: nil
not to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:230

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:271
Expected
    <*errors.errorString | 0xc420350c70>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:260

Issues about this test specifically: #31408

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42154aa20>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a07980>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:49
Expected error:
    <*errors.errorString | 0xc420c29820>: {
        s: "expected \"mode of file \\\"/etc/secret-volume/new-path-data-1\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/secret-volume/new-path-data-1\": -rw-r--r--\n    content of file \"/etc/secret-volume/new-path-data-1\": value-1\n    \n    \nto contain substring\n    <string>: mode of file \"/etc/secret-volume/new-path-data-1\": -r--------",
    }
    expected "mode of file \"/etc/secret-volume/new-path-data-1\": -r--------" in container output: Expected
        <string>: mode of file "/etc/secret-volume/new-path-data-1": -rw-r--r--
        content of file "/etc/secret-volume/new-path-data-1": value-1
        
        
    to contain substring
        <string>: mode of file "/etc/secret-volume/new-path-data-1": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37529

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421003b80>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:493
Expected
    <int>: 1
to equal
    <int>: 42
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:463

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc421a2eac0>: {
        s: "expected \"mode of file \\\"/etc/podname\\\": -r--------\" in container output: Expected\n    <string>: mode of file \"/etc/podname\": -rw-r--r--\n    \nto contain substring\n    <string>: mode of file \"/etc/podname\": -r--------",
    }
    expected "mode of file \"/etc/podname\": -r--------" in container output: Expected
        <string>: mode of file "/etc/podname": -rw-r--r--
        
    to contain substring
        <string>: mode of file "/etc/podname": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37423

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4229f22e0>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214ab400>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:181
Expected error:
    <*errors.errorString | 0xc42151caf0>: {
        s: "expected \"[1-9]\" in container output: Expected\n    <string>: content of file \"/etc/cpu_limit\": 0\n    \nto match regular expression\n    <string>: [1-9]",
    }
    expected "[1-9]" in container output: Expected
        <string>: content of file "/etc/cpu_limit": 0
        
    to match regular expression
        <string>: [1-9]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42105a220>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc420aac200>: {
        s: "error running gcloud [container clusters --project=gke-up-c1-3-g1-5-up-clu-n --zone=us-central1-a upgrade jenkins-e2e --cluster-version=1.5.0-beta.2.2+f64c9f2d999ceb --quiet --image-type=gci]; got error exit status 1, stdout \"\", stderr \"Upgrading jenkins-e2e...\\n..........done.\\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\\n name: u'operation-1480471083550-ca245de4'\\n operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)\\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/707337012235/zones/us-central1-a/operations/operation-1480471083550-ca245de4'\\n status: StatusValueValuesEnum(DONE, 3)\\n statusMessage: u'cloud-kubernetes::UNKNOWN: client: etcd cluster is unavailable or misconfigured\\\\ngoroutine 1709333 [running]:\\\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc42b49d100, 0x34, 0x1, 0x10)\\\\n\\\\tcloud/kubernetes/common/errors.go:627 +0x22f\\\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc428e2c980, 0xc42fb79c20)\\\\n\\\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42afacb80, 0x1, 0x1, 0x0, 0x1)\\\\n\\\\tcloud/kubernetes/common/errors.go:852 +0x12b\\\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42b438800, 0xc42ac0fc80, 0xc42dc0e340, 0x3, 0x4, 0x2, 0x4)\\\\n\\\\tcloud/kubernetes/common/call.go:130 +0x608\\\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42a62c040, 0x7fe929f86430, 0xc4230b0ab0, 0xc429a9b1e0, 0xc42abf4280, 0xc42de16a10, 0xc42ac0fc80, 0xc428d56410, 0xc3, 0xc42c396040, ...)\\\\n\\\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42af7e480, 0x7fe929f86430, 0xc4230b0ab0, 0xc42ac0fc80, 0x7fe92196d930, 0xc42de169a0, 0xc429a9b1e0, 0xc42abf4280, 0xc42de16a10, 0xc428d56410, ...)\\\\n\\\\tcloud/kubernetes/server/deploy.go:1830 
+0xdc\\\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42af7e480, 0x7fe929f86430, 0xc4230b0ab0, 0xc42ac0fc80, 0x7fe92196d930, 0xc42de169a0, 0xc421fa2f60, 0xc429a9b1e0, 0xc42abf4100, 0xc42c396040, ...)\\\\n\\\\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, 0x3, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1179 +0x3e5\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, 0x0, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1057 +0x108\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc400000002, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:943 +0x3d4\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0x2bc7fe0, 0xc42a8ccee0, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0xc400000002, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1877 +0xca\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42ac0fb60, 0xc42a8ccf60, 0x2bc7fe0, 0xc42a8ccee0, 0xc42b6dd714, 0xc, 0xc400000002, 0xc42b719b80, 0xc42b03e070, 0x7fe929f86430, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1869 +0x2fd\\\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\\\n\\\\tcloud/kubernetes/server/server.go:1871 +0xc44\\\\n'\\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/707337012235/zones/us-central1-a/clusters/jenkins-e2e/nodePools/default-pool'\\n zone: u'us-central1-a'>] finished with error: 
cloud-kubernetes::UNKNOWN: client: etcd cluster is unavailable or misconfigured\\ngoroutine 1709333 [running]:\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc42b49d100, 0x34, 0x1, 0x10)\\n\\tcloud/kubernetes/common/errors.go:627 +0x22f\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc428e2c980, 0xc42fb79c20)\\n\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42afacb80, 0x1, 0x1, 0x0, 0x1)\\n\\tcloud/kubernetes/common/errors.go:852 +0x12b\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42b438800, 0xc42ac0fc80, 0xc42dc0e340, 0x3, 0x4, 0x2, 0x4)\\n\\tcloud/kubernetes/common/call.go:130 +0x608\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42a62c040, 0x7fe929f86430, 0xc4230b0ab0, 0xc429a9b1e0, 0xc42abf4280, 0xc42de16a10, 0xc42ac0fc80, 0xc428d56410, 0xc3, 0xc42c396040, ...)\\n\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42af7e480, 0x7fe929f86430, 0xc4230b0ab0, 0xc42ac0fc80, 0x7fe92196d930, 0xc42de169a0, 0xc429a9b1e0, 0xc42abf4280, 0xc42de16a10, 0xc428d56410, ...)\\n\\tcloud/kubernetes/server/deploy.go:1830 +0xdc\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42af7e480, 0x7fe929f86430, 0xc4230b0ab0, 0xc42ac0fc80, 0x7fe92196d930, 0xc42de169a0, 0xc421fa2f60, 0xc429a9b1e0, 0xc42abf4100, 0xc42c396040, ...)\\n\\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, 0x3, ...)\\n\\tcloud/kubernetes/server/server.go:1179 +0x3e5\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, 0x0, 
...)\\n\\tcloud/kubernetes/server/server.go:1057 +0x108\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc400000002, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, ...)\\n\\tcloud/kubernetes/server/server.go:943 +0x3d4\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0x2bc7fe0, 0xc42a8ccee0, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0xc400000002, ...)\\n\\tcloud/kubernetes/server/server.go:1877 +0xca\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42ac0fb60, 0xc42a8ccf60, 0x2bc7fe0, 0xc42a8ccee0, 0xc42b6dd714, 0xc, 0xc400000002, 0xc42b719b80, 0xc42b03e070, 0x7fe929f86430, ...)\\n\\tcloud/kubernetes/server/server.go:1869 +0x2fd\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\n\\tcloud/kubernetes/server/server.go:1871 +0xc44\\n\\n\"",
    }
    error running gcloud [container clusters --project=gke-up-c1-3-g1-5-up-clu-n --zone=us-central1-a upgrade jenkins-e2e --cluster-version=1.5.0-beta.2.2+f64c9f2d999ceb --quiet --image-type=gci]; got error exit status 1, stdout "", stderr "Upgrading jenkins-e2e...\n..........done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1480471083550-ca245de4'\n operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/707337012235/zones/us-central1-a/operations/operation-1480471083550-ca245de4'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'cloud-kubernetes::UNKNOWN: client: etcd cluster is unavailable or misconfigured\\ngoroutine 1709333 [running]:\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc42b49d100, 0x34, 0x1, 0x10)\\n\\tcloud/kubernetes/common/errors.go:627 +0x22f\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc428e2c980, 0xc42fb79c20)\\n\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42afacb80, 0x1, 0x1, 0x0, 0x1)\\n\\tcloud/kubernetes/common/errors.go:852 +0x12b\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42b438800, 0xc42ac0fc80, 0xc42dc0e340, 0x3, 0x4, 0x2, 0x4)\\n\\tcloud/kubernetes/common/call.go:130 +0x608\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42a62c040, 0x7fe929f86430, 0xc4230b0ab0, 0xc429a9b1e0, 0xc42abf4280, 0xc42de16a10, 0xc42ac0fc80, 0xc428d56410, 0xc3, 0xc42c396040, ...)\\n\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42af7e480, 0x7fe929f86430, 0xc4230b0ab0, 0xc42ac0fc80, 0x7fe92196d930, 0xc42de169a0, 0xc429a9b1e0, 0xc42abf4280, 0xc42de16a10, 0xc428d56410, ...)\\n\\tcloud/kubernetes/server/deploy.go:1830 +0xdc\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42af7e480, 0x7fe929f86430, 0xc4230b0ab0, 
0xc42ac0fc80, 0x7fe92196d930, 0xc42de169a0, 0xc421fa2f60, 0xc429a9b1e0, 0xc42abf4100, 0xc42c396040, ...)\\n\\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, 0x3, ...)\\n\\tcloud/kubernetes/server/server.go:1179 +0x3e5\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, 0x0, ...)\\n\\tcloud/kubernetes/server/server.go:1057 +0x108\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc400000002, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, ...)\\n\\tcloud/kubernetes/server/server.go:943 +0x3d4\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0x2bc7fe0, 0xc42a8ccee0, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0xc400000002, ...)\\n\\tcloud/kubernetes/server/server.go:1877 +0xca\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42ac0fb60, 0xc42a8ccf60, 0x2bc7fe0, 0xc42a8ccee0, 0xc42b6dd714, 0xc, 0xc400000002, 0xc42b719b80, 0xc42b03e070, 0x7fe929f86430, ...)\\n\\tcloud/kubernetes/server/server.go:1869 +0x2fd\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\n\\tcloud/kubernetes/server/server.go:1871 +0xc44\\n'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/707337012235/zones/us-central1-a/clusters/jenkins-e2e/nodePools/default-pool'\n zone: u'us-central1-a'>] finished with error: cloud-kubernetes::UNKNOWN: client: etcd cluster is unavailable or misconfigured\ngoroutine 1709333 
[running]:\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc42b49d100, 0x34, 0x1, 0x10)\n\tcloud/kubernetes/common/errors.go:627 +0x22f\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2b800a0, 0xc428e2c980, 0xc42fb79c20)\n\tcloud/kubernetes/common/errors.go:681 +0x1ac\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42afacb80, 0x1, 0x1, 0x0, 0x1)\n\tcloud/kubernetes/common/errors.go:852 +0x12b\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42b438800, 0xc42ac0fc80, 0xc42dc0e340, 0x3, 0x4, 0x2, 0x4)\n\tcloud/kubernetes/common/call.go:130 +0x608\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc42a62c040, 0x7fe929f86430, 0xc4230b0ab0, 0xc429a9b1e0, 0xc42abf4280, 0xc42de16a10, 0xc42ac0fc80, 0xc428d56410, 0xc3, 0xc42c396040, ...)\n\tcloud/kubernetes/server/updater/updater.go:70 +0x693\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42af7e480, 0x7fe929f86430, 0xc4230b0ab0, 0xc42ac0fc80, 0x7fe92196d930, 0xc42de169a0, 0xc429a9b1e0, 0xc42abf4280, 0xc42de16a10, 0xc428d56410, ...)\n\tcloud/kubernetes/server/deploy.go:1830 +0xdc\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42af7e480, 0x7fe929f86430, 0xc4230b0ab0, 0xc42ac0fc80, 0x7fe92196d930, 0xc42de169a0, 0xc421fa2f60, 0xc429a9b1e0, 0xc42abf4100, 0xc42c396040, ...)\n\tcloud/kubernetes/server/deploy.go:1767 +0xb5e\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, 0x3, ...)\n\tcloud/kubernetes/server/server.go:1179 +0x3e5\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, 0x0, ...)\n\tcloud/kubernetes/server/server.go:1057 +0x108\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42b03e070, 0x7fe929f86430, 
0xc42de35ad0, 0xc42ac0fc80, 0xc400000002, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0x2bc7fe0, 0xc42a8ccee0, ...)\n\tcloud/kubernetes/server/server.go:943 +0x3d4\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42b03e070, 0x7fe929f86430, 0xc42de35ad0, 0xc42ac0fc80, 0x2bc7fe0, 0xc42a8ccee0, 0xc429a9b1e0, 0xc42abf4100, 0xc42a532180, 0xc400000002, ...)\n\tcloud/kubernetes/server/server.go:1877 +0xca\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42ac0fb60, 0xc42a8ccf60, 0x2bc7fe0, 0xc42a8ccee0, 0xc42b6dd714, 0xc, 0xc400000002, 0xc42b719b80, 0xc42b03e070, 0x7fe929f86430, ...)\n\tcloud/kubernetes/server/server.go:1869 +0x2fd\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\n\tcloud/kubernetes/server/server.go:1871 +0xc44\n\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:93
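
Buried in the gcloud stack dump above, the actionable part is the first line of the operation's `statusMessage`: `cloud-kubernetes::UNKNOWN: client: etcd cluster is unavailable or misconfigured`. Since newlines in this stderr appear as literal backslash-n sequences, a small sketch (illustrative only) can extract it:

```python
import re

def root_cause(stderr_text):
    """Return the first line of the GKE operation statusMessage embedded in gcloud
    stderr, where newlines appear as literal backslash-n sequences."""
    m = re.search(r"statusMessage: u'(.*?)\\n", stderr_text)
    return m.group(1) if m else None

sample = r"statusMessage: u'cloud-kubernetes::UNKNOWN: client: etcd cluster is unavailable or misconfigured\ngoroutine 1709333 [running]:'"
print(root_cause(sample))
```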

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421be8e70>: {
        s: "Namespace e2e-tests-cluster-upgrade-u7pm6 is active",
    }
    Namespace e2e-tests-cluster-upgrade-u7pm6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78
@k8s-github-robot added area/test-infra, kind/flake, and priority/backlog labels Dec 1, 2016
@k8s-github-robot
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-cluster-new/416/

Multiple broken tests:

Failed: [k8s.io] StatefulSet [Slow] [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:147
Nov 29 14:10:09.486: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:929

Issues about this test specifically: #37436

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421c00020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1060

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc421ee3390>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:16, Replicas:7, UpdatedReplicas:7, AvailableReplicas:6, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63616063359, nsec:0, loc:(*time.Location)(0x3cdd160)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616063359, nsec:0, loc:(*time.Location)(0x3cdd160)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63616063359, nsec:0, loc:(*time.Location)(0x3cdd160)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616063326, nsec:0, loc:(*time.Location)(0x3cdd160)}}, Reason:\"NewReplicaSetAvailable\", Message:\"Replica set \\\"nginx-3837372172\\\" has successfully progressed.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:16, Replicas:7, UpdatedReplicas:7, AvailableReplicas:6, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63616063359, nsec:0, loc:(*time.Location)(0x3cdd160)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616063359, nsec:0, loc:(*time.Location)(0x3cdd160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63616063359, nsec:0, loc:(*time.Location)(0x3cdd160)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616063326, nsec:0, loc:(*time.Location)(0x3cdd160)}}, Reason:"NewReplicaSetAvailable", Message:"Replica set \"nginx-3837372172\" has successfully progressed."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1458

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42038cd10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Stateful Set recreate [Slow] should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:507
Nov 29 12:33:31.954: Pod test-pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:462

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Nov 29 13:45:27.430: timeout waiting 15m0s for pods size to be 5
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

@fejta fejta closed this as completed Dec 2, 2016