
Processes require root permissions to write to CSI-S3 volumes mounted in pods under OpenShift #334

Open
srikumar003 opened this issue Mar 6, 2024 · 0 comments
Labels: bug (Something isn't working), s3 (Issues relating to S3/CSI-S3 integration)

Comments

srikumar003 (Collaborator) commented Mar 6, 2024

@starpit brought this to my attention and I am using his investigation to open this issue. Thanks @starpit

Original issue:

I have everything more or less running in OpenShift. My S3 dataset (MinIO, also running in the cluster) mounts into a non-root pod. All good, except that the pod cannot read or write the mount: writes fail with "permission denied", and reads do not reflect the contents of the bucket.
If instead I run the pod as root, it can read and write as expected.
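
For context, a minimal sketch of the failing setup (names and the image are illustrative, and it assumes the Datashim convention of exposing the Dataset as a PVC with the same name): run as-is, writes on the mount fail with "permission denied"; adding runAsUser: 0 makes reads and writes behave as expected.

```yaml
# Hypothetical reproduction pod: a non-root container mounting the PVC that
# Datashim creates for the Dataset (PVC name assumed to equal the Dataset name).
apiVersion: v1
kind: Pod
metadata:
  name: s3-rw-test
spec:
  securityContext:
    runAsUser: 1000            # non-root: reads/writes on the mount misbehave
    runAsGroup: 1000
  containers:
    - name: main
      image: registry.access.redhat.com/ubi9/ubi-minimal   # any non-root-capable image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-dataset   # hypothetical Dataset / PVC name
```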

**Trial 1: Experiments with fsGroup and fsGroupChangePolicy**

  1. Tried adding the following to the pod spec:

     spec:
       securityContext:
         fsGroup: 1000
         fsGroupChangePolicy: "OnRootMismatch"

     Outcome: Did not work.

  2. Tried fsGroupChangePolicy: "Always".

     Outcome: Did not solve the problem.

  3. Pinned fsGroup to runAsGroup. Tried:

     spec:
       securityContext:
         runAsUser: 1000
         runAsGroup: 1000
         fsGroup: 1000
         fsGroupChangePolicy: "OnRootMismatch"

     Outcome: Did not solve the problem.
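
One related knob, noted here as an untested assumption rather than something from the investigation above: the kubelet only applies fsGroup ownership changes to a CSI volume when the driver's CSIDriver object sets fsGroupPolicy: File (the default, ReadWriteOnceWithFSType, typically skips RWX volumes without an fstype). A sketch of that change, with the driver name guessed and the other fields left as whatever is already deployed:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: ch.ctrox.csi.s3-driver   # assumed name - check `kubectl get csidriver`
spec:
  attachRequired: false          # keep whatever the deployed object already declares
  podInfoOnMount: true           # keep whatever the deployed object already declares
  fsGroupPolicy: File            # the relevant change: let the kubelet apply fsGroup to the volume
```

Even with this set, a FUSE filesystem such as goofys may ignore chown/chmod, so this alone may not resolve the issue; it is only relevant because the fsGroup experiments above depend on fsGroup being applied in the first place.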

**Trial 2: Experiments with setting user and group IDs**

Add two new fields (uid and gid) to the secret that Datashim creates.

Or: set uid and gid in the Dataset -> propagate them into the secret that Datashim creates -> have csi-s3 pick them up automatically?
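
A sketch of what that could look like; the uid and gid key names are hypothetical (they are not an existing Datashim or csi-s3 field), and the existing keys should be checked against the secret Datashim actually generates:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-dataset          # hypothetical: the per-Dataset secret that Datashim creates
type: Opaque
stringData:
  accessKeyID: <redacted>
  secretAccessKey: <redacted>
  endpoint: http://s3.<redacted>:9000/
  uid: "2000"                    # proposed new field, to be forwarded to goofys as --uid
  gid: "2000"                    # proposed new field, to be forwarded to goofys as --gid
```

For reference, the current mounter struct in csi-s3, which has no uid/gid fields yet: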

 type goofysMounter struct {
       bucket          *bucket
       endpoint        string
       region          string
       accessKeyID     string
       secretAccessKey string
       volumeID        string
       readonly        bool
}

My uid and gid options are being passed through:

args: [--endpoint=http://s3.<redacted>:9000/ --type-cache-ttl 1s --stat-cache-ttl 1s --dir-mode 0777 --file-mode 0777 --uid 2000 --gid 2000

To no avail yet.

Neither did the following work:

I0305 22:18:42.706261       1 mounter.go:56] Mounting fuse with command: goofys and args: [--endpoint=http://s3.<redacted>:9000/ --type-cache-ttl 1s --stat-cache-ttl 1s --dir-mode 0777 --file-mode 0777 --uid 2000 --gid 2000 --http-timeout 5m -o allow_other,user_id=2000,group_id=2000 --profile=pvc-4d4ad02b-25a4-415f-aea3-38e152f73b09 workdir /var/data/kubelet/pods/3a56abb5-5c38-4a31-bba1-f60b702292d8/volumes/kubernetes.io~csi/pvc-4d4ad02b-25a4-415f-aea3-38e152f73b09/mount]
srikumar003 added the bug and s3 labels on Mar 6, 2024