CSI Driver ignores fsGroup and mounts volumes with root ownership #80
This is reproducible out-of-the-box on LKE. A simple pod with a persistent volume claim fails to run on LKE, but works fine on other clusters:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsgrouptest
  labels:
    run: fsgrouptest
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: fsgrouptest
  name: fsgrouptest
spec:
  containers:
    - name: fsgrouptest
      image: alpine
      command:
        - /bin/sh
        - -c
        - "id; touch /data/foo; sleep 3600"
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: fsgrouptest
  securityContext:
    runAsUser: 65500
    runAsGroup: 65500
    fsGroup: 65500
```
The output of the failed container is
On other clusters (obviously, using other PV provisioners) this works by ensuring that the
This can be fixed by adding a parameter to the StorageClass objects. Unfortunately, StorageClasses are not mutable once created, so on existing clusters you'll need to create new storage classes for both of the Linode-provided ones.
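A sketch of what such a replacement StorageClass could look like. The `csi.storage.k8s.io/fstype` parameter comes from the CSI external-provisioner; the provisioner name is the Linode CSI driver's, and the metadata name here is hypothetical:

```yaml
# Hypothetical replacement storage class. Setting csi.storage.k8s.io/fstype
# makes newly provisioned PVs carry an fsType under spec.csi, which the
# kubelet requires before it will apply fsGroup ownership changes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linode-block-storage-ext4   # hypothetical name
provisioner: linodebs.csi.linode.com
parameters:
  csi.storage.k8s.io/fstype: ext4
```

Note that this only affects PVs provisioned after the class is created; existing PVs keep their old, fsType-less spec.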
For me, using these storage classes has fixed the problem.
As far as I can tell, this should have been fixed in the newer releases - I do see the
I've installed the csi-driver using the helm chart.

Closing the issue for now; feel free to re-open if this is still a problem and we'll investigate.
Bug Reporting

Expected Behavior

When I set a pod's `securityContext.fsGroup`, the mounted volume should be owned by that group.

Actual Behavior

Typically the device gets mounted with root user and group ownership, and any subdirectories retain their existing ownership.
Steps to Reproduce the Problem

An example of the base Traefik helm chart deployment.yaml:

As you can see, there is a `securityContext.fsGroup` at the bottom set to `65532`. Usually this would mount the volumes with that group ID as the owner; however, the resulting permissions look more like this:

which is consistent with my findings that the folder permissions are applied inconsistently.
Additional Notes

According to the documentation at https://kubernetes-csi.github.io/docs/support-fsgroup.html#supported-modes, the `fsType` needs to be set in the PV in order for `fsGroup` settings to apply properly, but it doesn't seem to be set on PV creation:

Under `csi` there should be an `fsType` according to the current spec: https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#persistent-volumes
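For illustration, a sketch of the relevant portion of a provisioned PV with `fsType` populated, following the PersistentVolume API linked above. The driver name is the Linode CSI driver's; the PV name and `volumeHandle` are hypothetical placeholders:

```yaml
# Sketch of a provisioned PV spec. The key field is fsType under csi:
# without it, the kubelet skips the fsGroup ownership change entirely.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example               # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: linodebs.csi.linode.com
    volumeHandle: "12345-example" # hypothetical handle
    fsType: ext4                  # must be set for fsGroup to apply
```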