Failure cluster [c5c4ad...] failed 98 builds, 12 jobs, and 1 test over 1 day #58233

Closed
fejta-bot opened this Issue Jan 13, 2018 · 1 comment


fejta-bot commented Jan 13, 2018

Failure cluster c5c4ad0c213091f419da

Error text:
error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cos-image-validation --zone=us-central1-f --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --skip="\[Flaky\]|\[Serial\]" --test_args=--feature-gates=KubeletConfigFile=true --generate-kubelet-config-file=true --test-timeout=1h10m0s --images=cos-stable-63-10032-71-0-p --image-project=gke-node-images --instance-metadata=user-data<test/e2e_node/jenkins/cos-init-disable-live-restore.yaml,gci-update-strategy=update_disabled: exit status 1
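
The --skip value passed to the runner above is a Go regular expression that Ginkgo matches against test descriptions, so any spec tagged [Flaky] or [Serial] is excluded from these runs. A minimal sketch of that filtering behaviour, using hypothetical spec names purely for illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Same pattern as --skip="\[Flaky\]|\[Serial\]" in the failing command:
        // skip any spec whose description contains [Flaky] or [Serial].
        skip := regexp.MustCompile(`\[Flaky\]|\[Serial\]`)

        // Hypothetical spec descriptions, for illustration only.
        specs := []string{
            "MirrorPod should be recreated when deleted [Flaky]",
            "Density create a batch of pods [Serial]",
            "Container Runtime blackbox test should report termination message",
        }
        for _, s := range specs {
            fmt.Printf("skipped=%v  %s\n", skip.MatchString(s), s)
        }
    }
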
Failure cluster statistics:

1 test failed, 12 jobs failed, 98 builds failed.
Failure stats cover the 1-day time range '12 Jan 2018 02:39 UTC' to '13 Jan 2018 02:39 UTC'.

Top failed tests by jobs failed:

  Test Name    Jobs Failed
  Node Tests   12

Top failed jobs by builds failed:

  Job Name                                              Builds Failed   Latest Failure
  ci-kubernetes-e2enode-cosstable1-k8sstable1-default   10              13 Jan 2018 02:24 UTC
  ci-kubernetes-e2enode-cosbeta-k8sstable3-default      10              13 Jan 2018 02:13 UTC
  ci-kubernetes-e2enode-cosbeta-k8sstable2-serial       9               13 Jan 2018 00:55 UTC

Current Status

Contributor

k8s-ci-robot commented Jan 13, 2018

@fejta-bot: There are no sig labels on this issue. Please add a sig label.

A sig label can be added by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals
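
As a concrete example, since the failures here come from the node e2e jobs, a triager would most likely route this issue to SIG Node (treat that choice of SIG as an assumption, not part of the bot's report) by posting a comment containing only:

    /sig node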

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

fejta closed this Jan 28, 2018
