
Downward API volume should provide podname as non-root with fsgroup [Feature:FSGroup] [Volume] #15003

Closed
bparees opened this issue Jul 2, 2017 · 5 comments
Labels: component/kubernetes, component/storage, kind/test-flake, lifecycle/stale, priority/P1

Comments

@bparees (Contributor) commented Jul 2, 2017

/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-origin5_pXiR/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:83
Expected error:
    <*errors.errorString | 0xc4216fac30>: {
        s: "expected pod \"metadata-volume-dcecacd1-5f01-11e7-ba1e-0e66c60935d2\" success: <nil>",
    }
    expected pod "metadata-volume-dcecacd1-5f01-11e7-ba1e-0e66c60935d2" success: <nil>
not to have occurred
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-origin5_pXiR/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Seen in: https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/1208/
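For context, the failing e2e case ("should provide podname as non-root with fsgroup") runs a pod that mounts a downward API volume exposing `metadata.name`, while the pod runs as a non-root user with an fsGroup set, and then checks that the container can read the file. A minimal sketch of such a pod using the `k8s.io/api/core/v1` types follows; the UID, fsGroup value, image, command, and mount path are illustrative assumptions, not copied from the test source:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPodAsNonRootWithFSGroup sketches the kind of pod the e2e test
// creates: a downward API volume that writes metadata.name to a file, mounted
// into a container that runs as a non-root UID while the pod sets an fsGroup.
// The UID, GID, image, and paths below are illustrative, not taken from the test.
func downwardAPIPodAsNonRootWithFSGroup(podName string) *corev1.Pod {
	uid := int64(1001)     // hypothetical non-root user
	fsGroup := int64(1234) // hypothetical fsGroup applied to the volume contents

	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: podName},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.name",
							},
						}},
					},
				},
			}},
		},
	}
}
```

This is only a sketch of the scenario under test; the actual test lives in vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go, as referenced in the traceback above.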

@derekwaynecarr (Member) commented:
A similar issue was seen upstream:
kubernetes/kubernetes#42980

It was traced back to a kubelet panic, fixed in:
kubernetes/kubernetes#42927

which we have in origin now:
#13653

We need to triage the node logs, since the original upstream flake should not be the reason this one failed.

@derekwaynecarr (Member) commented:

Per discussion with @stevekuznetsov, there is no way to access the node logs to debug this flake further.

I did review the docker logs and saw nothing of significance.

Node logs for future flakes will be available with openshift-eng/aos-cd-jobs#389

@derekwaynecarr (Member) commented:

The flake happened on GCE, and node logs were not yet available for GCE failures...

@openshift-bot (Contributor) commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label on Feb 13, 2018
@stevekuznetsov (Contributor) commented:

Doesn't seem like the problem still exists.

/close
