[CephFS] Permission denied for mounted directory except last created pod #3562

Closed
lwj5 opened this issue Dec 2, 2022 · 11 comments
Labels
component/cephfs (Issues related to CephFS), question (Further information is requested), wontfix (This will not be worked on)

Comments

@lwj5

lwj5 commented Dec 2, 2022

Describe the bug

When creating a deployment with a ReadWriteMany CephFS PVC, only the most recently created pod has access to the mounted folder. Earlier pods get "Permission denied" when cd-ing into that directory.

If the deployment is run as privileged, this does not happen.
I've looked at #1097, but there is no SELinux denial, and I've also set container_use_cephfs=1 to test.

Environment details

  • Image/version of Ceph CSI driver : ceph-csi-cephfs
  • Helm chart version : 3.7.2
  • Kernel version : 4.18.0-372.19.1.el8_6.x86_64
  • Mounter used for mounting PVC (for CephFS it's fuse or kernel; for RBD it's
    krbd or rbd-nbd) : kernel
  • Kubernetes cluster version : RKE2 v1.24.7
  • Ceph cluster version : 17.2.5

Steps to reproduce

Steps to reproduce the behavior:

  1. Create a deployment with 1 replica using a CephFS RWX PVC
  2. Exec a shell into the pod and cd into the mount folder (OK)
  3. Scale the deployment to 2 replicas
  4. cd into the mounted folder in the first replica - Permission denied (see the command sketch below)

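For reference, a minimal command-line sketch of the reproduction (names are taken from the manifest posted further below: namespace test, deployment ceph, mount path /ceph; the pod name is a placeholder):

kubectl -n test get pods                             # note the name of the first pod
kubectl -n test exec -it <first-pod> -- ls /ceph     # OK
kubectl -n test scale deployment/ceph --replicas=2
kubectl -n test exec -it <first-pod> -- ls /ceph     # Permission denied
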
Actual results

Permission denied in all earlier pods.

Expected behavior

All pods can access the volume.

Logs

If the issue is in PVC mounting, please attach complete logs of the containers below.

  • csi-rbdplugin/csi-cephfsplugin and driver-registrar container logs from
    plugin pod from the node where the mount is failing.

csi-cephfsplugin logs for node of 1st replica

I1202 05:34:02.819369 1370917 utils.go:195] ID: 26723 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1202 05:34:02.819405 1370917 utils.go:206] ID: 26723 GRPC request: {}
I1202 05:34:02.819479 1370917 utils.go:212] ID: 26723 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1202 05:34:02.820063 1370917 utils.go:195] ID: 26724 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC call: /csi.v1.Node/NodeStageVolume
I1202 05:34:02.820147 1370917 utils.go:206] ID: 26724 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/7a23165c3ab2cb52daa7f7aa50347035b4ba41d619699170bf3608a6296f1bd9/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"b6ea4a39-c41d-4b8e-8b99-ffe944308f7b","fsName":"data","storage.kubernetes.io/csiProvisionerIdentity":"1658237899921-8081-cephfs.csi.ceph.com","subvolumeName":"csi-vol-938957cf-076c-11ed-9858-363114aae053","subvolumePath":"/volumes/csi/csi-vol-938957cf-076c-11ed-9858-363114aae053/6ba366d4-a6b8-4b50-a6af-68e6037c5aa0"},"volume_id":"0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053"}
I1202 05:34:02.824109 1370917 omap.go:88] ID: 26724 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 got omap values: (pool="cfs_metadata", namespace="csi", name="csi.volume.938957cf-076c-11ed-9858-363114aae053"): map[csi.imagename:csi-vol-938957cf-076c-11ed-9858-363114aae053 csi.volname:pvc-74206ef3-4bcd-40e6-af26-40ec2b994a23]
I1202 05:34:02.866998 1370917 volumemounter.go:126] requested mounter: , chosen mounter: kernel
I1202 05:34:02.867083 1370917 nodeserver.go:247] ID: 26724 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 cephfs: mounting volume 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 with Ceph kernel client
I1202 05:34:02.869739 1370917 cephcmds.go:105] ID: 26724 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 command succeeded: modprobe [ceph]
I1202 05:34:02.945491 1370917 cephcmds.go:105] ID: 26724 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 command succeeded: mount [-t ceph 192.168.20.102,192.168.20.103,192.168.20.104:/volumes/csi/csi-vol-938957cf-076c-11ed-9858-363114aae053/6ba366d4-a6b8-4b50-a6af-68e6037c5aa0 /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/7a23165c3ab2cb52daa7f7aa50347035b4ba41d619699170bf3608a6296f1bd9/globalmount -o name=admin,secretfile=/tmp/csi/keys/keyfile-3317337405,mds_namespace=data,_netdev]
I1202 05:34:02.945607 1370917 nodeserver.go:206] ID: 26724 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 cephfs: successfully mounted volume 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 to /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/7a23165c3ab2cb52daa7f7aa50347035b4ba41d619699170bf3608a6296f1bd9/globalmount
I1202 05:34:02.945642 1370917 utils.go:212] ID: 26724 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC response: {}
I1202 05:34:02.946477 1370917 utils.go:195] ID: 26725 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1202 05:34:02.946527 1370917 utils.go:206] ID: 26725 GRPC request: {}
I1202 05:34:02.946619 1370917 utils.go:212] ID: 26725 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1202 05:34:02.947246 1370917 utils.go:195] ID: 26726 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1202 05:34:02.947299 1370917 utils.go:206] ID: 26726 GRPC request: {}
I1202 05:34:02.947382 1370917 utils.go:212] ID: 26726 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1202 05:34:02.948193 1370917 utils.go:195] ID: 26727 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1202 05:34:02.948219 1370917 utils.go:206] ID: 26727 GRPC request: {}
I1202 05:34:02.948274 1370917 utils.go:212] ID: 26727 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1202 05:34:02.948761 1370917 utils.go:195] ID: 26728 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC call: /csi.v1.Node/NodePublishVolume
I1202 05:34:02.948854 1370917 utils.go:206] ID: 26728 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/7a23165c3ab2cb52daa7f7aa50347035b4ba41d619699170bf3608a6296f1bd9/globalmount","target_path":"/var/lib/kubelet/pods/c3f729fe-2c56-41bf-aa53-b70e5ee17f10/volumes/kubernetes.io~csi/pvc-74206ef3-4bcd-40e6-af26-40ec2b994a23/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"b6ea4a39-c41d-4b8e-8b99-ffe944308f7b","fsName":"data","storage.kubernetes.io/csiProvisionerIdentity":"1658237899921-8081-cephfs.csi.ceph.com","subvolumeName":"csi-vol-938957cf-076c-11ed-9858-363114aae053","subvolumePath":"/volumes/csi/csi-vol-938957cf-076c-11ed-9858-363114aae053/6ba366d4-a6b8-4b50-a6af-68e6037c5aa0"},"volume_id":"0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053"}
I1202 05:34:02.953715 1370917 cephcmds.go:105] ID: 26728 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 command succeeded: mount [-o bind,_netdev /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/7a23165c3ab2cb52daa7f7aa50347035b4ba41d619699170bf3608a6296f1bd9/globalmount /var/lib/kubelet/pods/c3f729fe-2c56-41bf-aa53-b70e5ee17f10/volumes/kubernetes.io~csi/pvc-74206ef3-4bcd-40e6-af26-40ec2b994a23/mount]
I1202 05:34:02.953739 1370917 nodeserver.go:467] ID: 26728 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 cephfs: successfully bind-mounted volume 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 to /var/lib/kubelet/pods/c3f729fe-2c56-41bf-aa53-b70e5ee17f10/volumes/kubernetes.io~csi/pvc-74206ef3-4bcd-40e6-af26-40ec2b994a23/mount
I1202 05:34:02.953758 1370917 utils.go:212] ID: 26728 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC response: {}
...
I1202 05:34:49.041393 1370917 utils.go:195] ID: 26738 GRPC call: /csi.v1.Node/NodeGetVolumeStats
I1202 05:34:49.041456 1370917 utils.go:206] ID: 26738 GRPC request: {"volume_id":"0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053","volume_path":"/var/lib/kubelet/pods/c3f729fe-2c56-41bf-aa53-b70e5ee17f10/volumes/kubernetes.io~csi/pvc-74206ef3-4bcd-40e6-af26-40ec2b994a23/mount"}

csi-cephfsplugin logs for node of 2nd replica

I1202 05:35:07.470145 3042812 utils.go:195] ID: 32331 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1202 05:35:07.470171 3042812 utils.go:206] ID: 32331 GRPC request: {}
I1202 05:35:07.470223 3042812 utils.go:212] ID: 32331 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1202 05:35:07.470712 3042812 utils.go:195] ID: 32332 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC call: /csi.v1.Node/NodeStageVolume
I1202 05:35:07.470825 3042812 utils.go:206] ID: 32332 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/7a23165c3ab2cb52daa7f7aa50347035b4ba41d619699170bf3608a6296f1bd9/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"b6ea4a39-c41d-4b8e-8b99-ffe944308f7b","fsName":"data","storage.kubernetes.io/csiProvisionerIdentity":"1658237899921-8081-cephfs.csi.ceph.com","subvolumeName":"csi-vol-938957cf-076c-11ed-9858-363114aae053","subvolumePath":"/volumes/csi/csi-vol-938957cf-076c-11ed-9858-363114aae053/6ba366d4-a6b8-4b50-a6af-68e6037c5aa0"},"volume_id":"0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053"}
I1202 05:35:07.474928 3042812 omap.go:88] ID: 32332 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 got omap values: (pool="cfs_metadata", namespace="csi", name="csi.volume.938957cf-076c-11ed-9858-363114aae053"): map[csi.imagename:csi-vol-938957cf-076c-11ed-9858-363114aae053 csi.volname:pvc-74206ef3-4bcd-40e6-af26-40ec2b994a23]
I1202 05:35:07.482558 3042812 volumemounter.go:126] requested mounter: , chosen mounter: kernel
I1202 05:35:07.482620 3042812 nodeserver.go:247] ID: 32332 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 cephfs: mounting volume 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 with Ceph kernel client
I1202 05:35:07.484552 3042812 cephcmds.go:105] ID: 32332 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 command succeeded: modprobe [ceph]
I1202 05:35:07.574130 3042812 cephcmds.go:105] ID: 32332 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 command succeeded: mount [-t ceph 192.168.20.102,192.168.20.103,192.168.20.104:/volumes/csi/csi-vol-938957cf-076c-11ed-9858-363114aae053/6ba366d4-a6b8-4b50-a6af-68e6037c5aa0 /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/7a23165c3ab2cb52daa7f7aa50347035b4ba41d619699170bf3608a6296f1bd9/globalmount -o name=admin,secretfile=/tmp/csi/keys/keyfile-1273640436,mds_namespace=data,_netdev]
I1202 05:35:07.574179 3042812 nodeserver.go:206] ID: 32332 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 cephfs: successfully mounted volume 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 to /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/7a23165c3ab2cb52daa7f7aa50347035b4ba41d619699170bf3608a6296f1bd9/globalmount
I1202 05:35:07.574212 3042812 utils.go:212] ID: 32332 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC response: {}
I1202 05:35:07.574793 3042812 utils.go:195] ID: 32333 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1202 05:35:07.574834 3042812 utils.go:206] ID: 32333 GRPC request: {}
I1202 05:35:07.574907 3042812 utils.go:212] ID: 32333 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1202 05:35:07.575274 3042812 utils.go:195] ID: 32334 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1202 05:35:07.575300 3042812 utils.go:206] ID: 32334 GRPC request: {}
I1202 05:35:07.575350 3042812 utils.go:212] ID: 32334 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1202 05:35:07.575762 3042812 utils.go:195] ID: 32335 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1202 05:35:07.575782 3042812 utils.go:206] ID: 32335 GRPC request: {}
I1202 05:35:07.575833 3042812 utils.go:212] ID: 32335 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1202 05:35:07.576324 3042812 utils.go:195] ID: 32336 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC call: /csi.v1.Node/NodePublishVolume
I1202 05:35:07.576401 3042812 utils.go:206] ID: 32336 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/7a23165c3ab2cb52daa7f7aa50347035b4ba41d619699170bf3608a6296f1bd9/globalmount","target_path":"/var/lib/kubelet/pods/68b1501c-e845-4f21-9abd-fb587d1da658/volumes/kubernetes.io~csi/pvc-74206ef3-4bcd-40e6-af26-40ec2b994a23/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"b6ea4a39-c41d-4b8e-8b99-ffe944308f7b","fsName":"data","storage.kubernetes.io/csiProvisionerIdentity":"1658237899921-8081-cephfs.csi.ceph.com","subvolumeName":"csi-vol-938957cf-076c-11ed-9858-363114aae053","subvolumePath":"/volumes/csi/csi-vol-938957cf-076c-11ed-9858-363114aae053/6ba366d4-a6b8-4b50-a6af-68e6037c5aa0"},"volume_id":"0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053"}
I1202 05:35:07.579683 3042812 cephcmds.go:105] ID: 32336 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 command succeeded: mount [-o bind,_netdev /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/7a23165c3ab2cb52daa7f7aa50347035b4ba41d619699170bf3608a6296f1bd9/globalmount /var/lib/kubelet/pods/68b1501c-e845-4f21-9abd-fb587d1da658/volumes/kubernetes.io~csi/pvc-74206ef3-4bcd-40e6-af26-40ec2b994a23/mount]
I1202 05:35:07.579698 3042812 nodeserver.go:467] ID: 32336 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 cephfs: successfully bind-mounted volume 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 to /var/lib/kubelet/pods/68b1501c-e845-4f21-9abd-fb587d1da658/volumes/kubernetes.io~csi/pvc-74206ef3-4bcd-40e6-af26-40ec2b994a23/mount
I1202 05:35:07.579713 3042812 utils.go:212] ID: 32336 Req-ID: 0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-938957cf-076c-11ed-9858-363114aae053 GRPC response: {}
I1202 05:35:08.533967 3042812 utils.go:195] ID: 32337 GRPC call: /csi.v1.Node/NodeGetCapabilities
I1202 05:35:08.534006 3042812 utils.go:206] ID: 32337 GRPC request: {}
I1202 05:35:08.534076 3042812 utils.go:212] ID: 32337 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I1202 05:35:08.534697 3042812 utils.go:195] ID: 32338 GRPC call: /csi.v1.Node/NodeGetVolumeStats
I1202 05:35:08.534739 3042812 utils.go:206] ID: 32338 GRPC request: {"volume_id":"0001-0024-b6ea4a39-c41d-4b8e-8b99-ffe944308f7b-0000000000000001-24f9e555-7130-11ed-b974-e61cfb3e91eb","volume_path":"/var/lib/kubelet/pods/d64490a8-95fb-440f-870f-bd7f39134959/volumes/kubernetes.io~csi/pvc-511dae86-f948-4d13-a289-f2e70b21792c/mount"}
I1202 05:35:08.535454 3042812 utils.go:212] ID: 32338 GRPC response: {"usage":[{"available":53494153216,"total":53687091200,"unit":1,"used":192937984},{"total":297220,"unit":2,"used":297221}]}

Additional context

PVC in question: pvc-74206ef3-4bcd-40e6-af26-40ec2b994a23
ID of first pod: c3f729fe-2c56-41bf-aa53-b70e5ee17f10
ID of second pod: 68b1501c-e845-4f21-9abd-fb587d1da658

See #1097

Rakshith-R added the question and component/cephfs labels Dec 2, 2022
@lwj5
Author

lwj5 commented Dec 2, 2022

This is the deployment used for the test. Let me know what you would like changed.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
...
  name: ceph
  namespace: test
spec:
  replicas: 2
...
  template:
    spec:
      affinity: {}
      containers:
      - command:
        - sleep
        - "10000"
        image: debian:bullseye-slim
        imagePullPolicy: IfNotPresent
        name: container
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          readOnlyRootFilesystem: false
          runAsUser: 1001
        volumeMounts:
        - mountPath: /ceph
          name: vol-igx3b
...
      securityContext:
        fsGroup: 1001
      volumes:
      - name: vol-igx3b
        persistentVolumeClaim:
          claimName: cephfs

@Rakshith-R
Contributor

@humblec
Can you please take a look at this?

@jthiltges

We're seeing identical symptoms to the description above. In our case, it looks like SELinux category changes are responsible for the permission denial.

A ReadWriteMany CephFS volume in the first pod looks fine:

cms-jovyan@example:~$ ls -alZ /mnt
total 0
drwxrwxrwx. 10 root       root  system_u:object_r:container_file_t:s0:c31,c859  8 Dec  7 10:57 .
dr-xr-xr-x.  1 root       root  system_u:object_r:container_file_t:s0:c31,c859 50 Dec  7 18:09 ..
drwxr-xr-x.  3 cms-jovyan 11265 system_u:object_r:container_file_t:s0:c31,c859  3 Dec  6 18:41 densenet_onnx
drwxr-xr-x.  2 cms-jovyan 11265 system_u:object_r:container_file_t:s0:c31,c859  2 Dec  7 17:09 inception_graphdef

After a second pod mounts the volume, the first pod loses access:

cms-jovyan@example:~$ ls -alZ /mnt
ls: cannot open directory '/mnt': Permission denied

After ignoring the dontaudit rules (semodule -DB), we see a denial in the logs.

type=AVC msg=audit(1670436948.985:78042): avc:  denied  { read } for  pid=1851459 comm="ls" name="/" dev="ceph" ino=1099511627786 scontext=system_u:system_r:container_t:s0:c31,c859 tcontext=system_u:object_r:container_file_t:s0:c136,c663 tclass=dir permissive=0

After running setenforce 0 on the host system, we can again access the mount. Setting container_use_cephfs did not help.

cms-jovyan@example:~$ ls -alZ /mnt
total 0
drwxrwxrwx. 10 root       root  system_u:object_r:container_file_t:s0:c136,c663  8 Dec  7 10:57 .
dr-xr-xr-x.  1 root       root  system_u:object_r:container_file_t:s0:c31,c859  50 Dec  7 18:09 ..
drwxr-xr-x.  3 cms-jovyan 11265 system_u:object_r:container_file_t:s0:c136,c663  3 Dec  6 18:41 densenet_onnx
drwxr-xr-x.  2 cms-jovyan 11265 system_u:object_r:container_file_t:s0:c136,c663  2 Dec  7 17:09 inception_graphdef

I suspect that when the second pod starts up, it triggers a relabeling that changes the SELinux categories on the contents, resulting in a denial for the first pod.
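
For anyone retracing this diagnosis, a rough sketch of the commands used above (run as root on the affected node; purely illustrative, and enforcement plus the dontaudit rules should be restored afterwards):

semodule -DB                  # rebuild the SELinux policy with dontaudit rules disabled
ausearch -m AVC -ts recent    # surface denials such as the one quoted above
setenforce 0                  # permissive mode; access from the first pod returns
setenforce 1 && semodule -B   # restore enforcing mode and the dontaudit rules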

@github-actions

github-actions bot commented Jan 6, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

github-actions bot added the wontfix label Jan 6, 2023
@lwj5
Author

lwj5 commented Jan 7, 2023

Not stale

github-actions bot removed the wontfix label Jan 7, 2023
@github-actions

github-actions bot commented Feb 6, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

github-actions bot added the wontfix label Feb 6, 2023
@lwj5
Author

lwj5 commented Feb 13, 2023

Still present.

@Madhu-1
Collaborator

Madhu-1 commented Feb 13, 2023

@lwj5 This is a problem with SELinux relabelling on CephFS mounts. When you create the second pod, the CephFS mountpoint gets relabelled with the second pod's SELinux labels, and the first pod starts getting permission denied errors. You must ensure that all application pods using the same CephFS PVC use the same SELinux labels.
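
One way to do this (a minimal sketch, not an official recommendation; the level value below is an arbitrary example) is to pin the same MCS level in the pod-level securityContext of every workload that mounts the shared PVC:

spec:
  template:
    spec:
      securityContext:
        seLinuxOptions:
          level: "s0:c123,c456"   # must be identical across all pods sharing the PVC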

@lwj5
Author

lwj5 commented Feb 13, 2023

Thanks @Madhu-1 for the solution and @jthiltges for the diagnosis; this means that a static SELinux level must be set for every deployment.

I wish there were a better way, but nonetheless, thanks for the input.

lwj5 closed this as completed Feb 13, 2023
@jthiltges

Thank you all, and I appreciate the info, though this is disappointing news. Having to set pods to the same category weakens the security benefits of SELinux.

It would be less surprising if a ReadWriteMany mode could be treated like a Docker volume with :z, resulting in a shared content label.
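
For comparison, the Docker behaviour referenced here (paths and image name are placeholders): the lowercase :z option relabels the mounted content with a shared label that multiple containers can use, while :Z assigns a private, per-container label.

docker run -v /srv/shared:/data:z some-image    # shared content label
docker run -v /srv/private:/data:Z some-image   # private, per-container label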
