Failure cluster [309f06...] failed 172 builds, 34 jobs, and 12 tests over 1 day #56913

Closed
fejta-bot opened this Issue Dec 7, 2017 · 4 comments


fejta-bot commented Dec 7, 2017

Failure cluster 309f063888c9e4ba20d2

Error text:
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/resource_quota.go:715
Expected error:
    <*errors.errorString | 0xc4202646f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/resource_quota.go:786
```
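For reference, "timed out waiting for the condition" is not a test-specific message: it is the generic error returned when a poll-and-assert loop never sees its condition become true. Below is a minimal, hypothetical sketch of where that string comes from, assuming the usual `k8s.io/apimachinery/pkg/util/wait` polling pattern used by Kubernetes e2e tests; the condition function here is a stand-in, not the real quota-status check from resource_quota.go.

```go
// Hypothetical sketch, not the actual e2e test code.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Stand-in for "quota usage never reached the expected value":
	// the condition never returns true, so Poll gives up at the timeout.
	err := wait.Poll(1*time.Second, 5*time.Second, func() (bool, error) {
		return false, nil // keep polling
	})

	// err is wait.ErrWaitTimeout, whose message is exactly
	// "timed out waiting for the condition".
	fmt.Println(err)

	// The e2e test then asserts Expect(err).NotTo(HaveOccurred()), which
	// fails and produces the "Expected error ... not to have occurred"
	// output shown above.
}
```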
Failure cluster statistics:

12 tests failed, 34 jobs failed, 172 builds failed.
Failure stats cover the 1-day time range '6 Dec 2017 02:42 UTC' to '7 Dec 2017 02:42 UTC'.

Top failed tests by jobs failed:

| Test Name | Jobs Failed |
| --- | --- |
| [sig-scheduling] ResourceQuota should verify ResourceQuota with best effort scope. | 21 |
| [sig-scheduling] ResourceQuota should verify ResourceQuota with terminating scopes. | 20 |
| [sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] | 19 |
Top failed jobs by builds failed:

| Job Name | Builds Failed | Latest Failure |
| --- | --- | --- |
| ci-kubernetes-e2e-gci-gce-ip-alias | 15 | 7 Dec 2017 01:23 UTC |
| ci-kubernetes-e2e-gci-gce-etcd3 | 15 | 7 Dec 2017 02:01 UTC |
| ci-kubernetes-e2e-gci-gce-proto | 14 | 7 Dec 2017 01:47 UTC |
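The top failing tests listed above all exercise ResourceQuotas restricted to a scope (best effort or terminating) and then wait for the quota's status to reflect matching objects. As a rough, hypothetical illustration of the kind of object such a test creates (not the actual e2e test code; the name and limit below are made up), using the core/v1 types:

```go
// Hypothetical sketch of a scope-restricted ResourceQuota, for context only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort"},
		Spec: corev1.ResourceQuotaSpec{
			// Only BestEffort pods (no requests/limits) count against this quota.
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
			Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
		},
	}
	fmt.Printf("%+v\n", quota.Spec)

	// A test of this shape creates the quota in its namespace, creates pods,
	// and then polls the quota's .status.used; the failures above are that
	// poll in resource_quota.go timing out.
}
```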

/assign @mml @ncdc @derekwaynecarr

Rationale for assignments:
| Assignee or SIG area | Owns test(s) |
| --- | --- |
| mml | [sig-scheduling] ResourceQuota should verify ResourceQuota with best effort scope. |
| ncdc | [sig-scheduling] ResourceQuota should verify ResourceQuota with terminating scopes.; [sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a secret. |
| derekwaynecarr | [sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] |
| sig/api-machinery | [sig-scheduling] ResourceQuota should verify ResourceQuota with best effort scope.; [sig-scheduling] ResourceQuota should verify ResourceQuota with terminating scopes.; [sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] |
| sig/cli | [sig-cli] Kubectl client [k8s.io] Kubectl taint [Serial] should update the taint on a node |


php-coder commented Dec 12, 2017

/sig scheduling
/sig storage

fejta-bot commented Mar 12, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot commented Apr 11, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

fejta-bot commented May 11, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
