
cephfs: don't set explicit permissions on the volume #2847

Merged
merged 1 commit into from Feb 9, 2022

Conversation

@humblec (Collaborator) commented Feb 2, 2022

At present we are node staging with worldwide permissions, which is
not correct. We should allow the CO to take care of it and make
the decision.

Since we have been setting worldwide permissions, we are defaulting to fsGroupPolicy = None for CephFS at the moment. Once this is in, we can change the fsGroupPolicy to ReadWriteOnceWithFSType for CephFS too.

Fixes #2356

Signed-off-by: Humble Chirammal hchiramm@redhat.com

@mergify mergify bot added the component/cephfs Issues related to CephFS label Feb 2, 2022
@humblec (Collaborator, Author) commented Feb 3, 2022

@Mergifyio rebase

mergify bot (Contributor) commented Feb 3, 2022

rebase

✅ Branch has been successfully rebased

internal/cephfs/nodeserver.go — review threads (outdated, resolved)
@humblec (Collaborator, Author) commented Feb 8, 2022

Again? @nixpanic @Rakshith-R https://jenkins-ceph-csi.apps.ocp.ci.centos.org/blue/organizations/jenkins/k8s-e2e-external-storage-1.21/detail/k8s-e2e-external-storage-1.21/2658/pipeline

```
Apache Arrow for CentOS 8 - x86_64              122  B/s |  80  B     00:00
Errors during downloading metadata for repository 'apache-arrow-centos':
  - Status code: 404 for https://apache.jfrog.io/artifactory/arrow/centos/8/x86_64/repodata/repomd.xml (IP: 52.33.92.242)
Error: Failed to download metadata for repo 'apache-arrow-centos': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
[2/2] STEP 1/6: FROM quay.io/ceph/ceph:v16
Error: error building at STEP "RUN dnf -y install librados-devel librbd-devel /usr/bin/cc make git && true": error while running runtime: exit status 1
make: *** [Makefile:229: image-cephcsi] Error 125
script returned exit code 2
```

@humblec (Collaborator, Author) commented Feb 8, 2022

/retest ci/centos/mini-e2e-helm/k8s-1.22

@humblec (Collaborator, Author) commented Feb 8, 2022

/retest ci/centos/upgrade-tests-rbd

@humblec (Collaborator, Author) commented Feb 8, 2022

Cool, tests are passing after the rebase, so please disregard the previous comment :) @nixpanic @Rakshith-R PTAL, thanks!

@humblec humblec self-assigned this Feb 8, 2022
@humblec humblec added this to the release-3.6 milestone Feb 8, 2022
@humblec humblec requested a review from Madhu-1 February 9, 2022 04:59
@Rakshith-R (Contributor) left a comment

I am not sure how this will affect already-created volumes, new volumes, or FSGroups; I'll let others review this PR.

@humblec (Collaborator, Author) commented Feb 9, 2022

@nixpanic PTAL, thanks.

@nixpanic (Member) commented Feb 9, 2022

/retest ci/centos/upgrade-tests-cephfs

@nixpanic (Member) commented Feb 9, 2022

/retest ci/centos/upgrade-tests-cephfs

Feb  9 12:58:53.130: FAIL: failed to create storageclass: etcdserver: request timed out

@nixpanic (Member) commented Feb 9, 2022

@Mergifyio refresh

mergify bot (Contributor) commented Feb 9, 2022

refresh

✅ Pull request refreshed

At present we are node staging with worldwide permissions, which is
not correct. We should allow the CO to take care of it and make
the decision. This commit also removes `fuseMountOptions` and
`KernelMountOptions` as they are no longer needed.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
@mergify mergify bot merged commit 8f6a7da into ceph:devel Feb 9, 2022
humblec added a commit to humblec/rook that referenced this pull request Feb 11, 2022
```
ReadWriteOnceWithFSType: Indicates that volumes will be examined
to determine if volume ownership and permissions should be modified
to match the pod's security policy. Changes will only occur if the
fsType is defined and the persistent volume's accessModes contains
ReadWriteOnce.
```

In the interim, since we were giving 0777 permissions on node stage
of CephFS shares, we defaulted to None. However, giving worldwide
permissions to the volume is not the right thing, and it has been
fixed for CephFS via ceph/ceph-csi#2847.

This commit brings it back to the value that is also in parity
with the RBD driver.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
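The ReadWriteOnceWithFSType policy quoted above is configured on the Kubernetes CSIDriver object rather than inside the driver code. A minimal sketch of what that looks like, assuming the driver name used by upstream ceph-csi for CephFS:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: cephfs.csi.ceph.com
spec:
  # With ReadWriteOnceWithFSType, kubelet only applies fsGroup-based
  # ownership changes when fsType is set and the PV's accessModes
  # contain ReadWriteOnce; with None it never changes permissions.
  fsGroupPolicy: ReadWriteOnceWithFSType
```

With this in place, permission handling moves from the driver's node-stage step to the CO, which is the intent of this PR.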
humblec added a commit to humblec/rook that referenced this pull request Feb 11, 2022
(commit message identical to the one above)

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
@nixpanic (Member) commented
/retest ci/centos/upgrade-tests-cephfs

Feb  9 12:58:53.130: FAIL: failed to create storageclass: etcdserver: request timed out

Expected to be addressed with #2880

mergify bot pushed a commit to rook/rook that referenced this pull request Feb 28, 2022
(commit message identical to the one above)

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
(cherry picked from commit 6561bda)
humblec added a commit to humblec/rook that referenced this pull request Mar 1, 2022
(commit message identical to the one above)

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
(cherry picked from commit 6561bda)
(cherry picked from commit e324059)
Labels
component/cephfs Issues related to CephFS
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Try to get rid of global permission on cephFS volume and support FSGROUP based mount option
4 participants