Permission denied for non-root user #4

Open · volldream opened this issue Feb 16, 2023 · 9 comments

@volldream

I went through this recipe here and was able to mount the S3 bucket on the worker nodes, but the problem is that only the root user has access to the folder. I tried changing the configuration several times with no success. I think one more option should be added to goofys so that all users have access to the folder. Any ideas?
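
(For anyone landing here: by default FUSE restricts a mount to the user that created it, which is root for the mounter daemonset. A sketch of the goofys flags that relax this, per the goofys README; the uid/gid, modes, bucket name, and mount point below are placeholders:)

$ goofys -o allow_other \
    --uid 1000 --gid 1000 \
    --file-mode=0664 --dir-mode=0775 \
    my-bucket /var/s3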

@volldream volldream changed the title Permission denied for none root user Permission denied for non-root user Feb 16, 2023
@vsoch commented Feb 20, 2023

@volldream I followed the steps too but didn't see any (existing) files in my bucket - did you make any changes to the default tutorial?

@vsoch commented Feb 20, 2023

E.g., what did you put in place of "system:serviceaccount:my-namespace:my-service-account"? I was putting the namespace I wanted the volume created in, with the service account just default, but I'm wondering if the service account needs to be something else, and whether the namespace must be the same one the mounted pods are in. That makes this hard to use for an operator that wants total control of namespaces.
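
(For context: that system:serviceaccount:... string is matched against the token's sub claim in the IAM role's trust policy. A sketch of the relevant statement, assuming the standard IRSA setup from the AWS docs; the account ID, region, and OIDC provider ID are placeholders:)

$ cat > trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<OIDC_ID>"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc.eks.<region>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:<namespace>:<service-account>"
      }
    }
  }]
}
EOF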

@vsoch commented Feb 20, 2023

If I inspect the pods in the otomount namespace, I see that the ARN is invalid, e.g.:

$ kubectl logs -n otomount s3-mounter-7hvc5
2023/02/20 05:33:04.033074 s3.ERROR code=WebIdentityErr msg=failed to retrieve credentials, err=ValidationError: Request ARN is invalid
        status code: 400, request id: fdce9303-01a2-49b3-b8cb-be910781d849

2023/02/20 05:33:04.033118 main.ERROR Unable to access 'flux-operator-storage': WebIdentityErr: failed to retrieve credentials
caused by: ValidationError: Request ARN is invalid
        status code: 400, request id: fdce9303-01a2-49b3-b8cb-be910781d849
2023/02/20 05:33:04.033124 main.FATAL Mounting file system: Mount: initialization failed

@volldream (Author)

> @volldream I followed the steps too but didn't see any (existing) files in my bucket - did you make any changes to the default tutorial?

I first went through the default steps in the tutorial and could mount the folder on nodes and pods, but the problem was that only the root user could see the list of files or write into the folder.

@volldream (Author)

> E.g., what did you put in place of "system:serviceaccount:my-namespace:my-service-account"? I was putting the namespace I wanted the volume created in, with the service account just default, but I'm wondering if the service account needs to be something else, and whether the namespace must be the same one the mounted pods are in. That makes this hard to use for an operator that wants total control of namespaces.

The namespace should be the one where the service account has been created (otomount here), and the service account name should be s3-mounter, i.e. system:serviceaccount:otomount:s3-mounter.

@vsoch commented Feb 20, 2023

@volldream if I create everything in "otomount", is it OK if my pods are in a different namespace? They are created by a separate operator, so the PVC/PV would be in this second namespace too.
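
(For what it's worth: PersistentVolumes are cluster-scoped and only the claim is namespaced, so the PV itself doesn't live in either namespace. A sketch of the pair, assuming the tutorial's approach of exposing the goofys mount on the host at /var/s3; names and the consuming namespace are placeholders:)

$ kubectl apply -f - << 'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-volume          # cluster-scoped, no namespace
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteMany"]
  storageClassName: ""
  hostPath:
    path: /var/s3          # where the mounter daemonset exposed the bucket
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-claim
  namespace: my-operator-namespace   # the namespace the consuming pods run in
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""
  volumeName: s3-volume
  resources:
    requests:
      storage: 1Gi
EOF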

@vsoch commented Feb 21, 2023

Maybe I gave the wrong identifier for the ARN (do we only need to provide a portion?). Here is what I'm seeing for the service account and annotations:

$ kubectl describe serviceaccount -n otomount s3-mounter
Name:                s3-mounter
Namespace:           otomount
Labels:              app.kubernetes.io/managed-by=Helm
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::633XXXXXXXXXX:policy/kubernetes-s3-access
                     meta.helm.sh/release-name: s3-mounter
                     meta.helm.sh/release-namespace: otomount
Image pull secrets:  <none>
Mountable secrets:   s3-mounter-token-rndbb
Tokens:              s3-mounter-token-rndbb
Events:              <none>

I'm trying to create the PV/PVC in a different namespace - is that the issue here? If so, how does this cross namespaces?
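
(Noting for posterity: the eks.amazonaws.com/role-arn annotation above points at a policy ARN (arn:aws:iam::...:policy/...), but it has to reference an IAM role, which would explain the "Request ARN is invalid" error. Something along these lines, where the role name is a placeholder for a role that trusts the cluster's OIDC provider:)

$ kubectl annotate serviceaccount -n otomount s3-mounter --overwrite \
    eks.amazonaws.com/role-arn=arn:aws:iam::<ACCOUNT_ID>:role/<irsa-role-name>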

@vsoch commented Mar 2, 2023

I wanted to follow up here: we got everything working, but there were a few issues in the tutorial you linked above @volldream. If you look at the "AWS CLI" tab for step 2 here: https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html, a few steps were missing. What worked for us was to use eksctl to create the IAM service account (the other tab for step 2, sketched below), save the populated helm chart to a YAML file, and tweak the goofys bind options too. We documented our full steps here: https://flux-framework.org/flux-operator/deployment/aws.html#run-snakemake-with-a-shared-filesystem (note that this is likely to change as the operator is developed).

TLDR: I think that when you deploy with eksctl and/or use an operator in another namespace, there are more steps than outlined in that doc! I can only guess it has to do with crossing namespaces or using an operator, because the example worked for you. Finally, note that a public bucket is obviously not ideal; for anyone trying to reproduce this, please limit the IAM policy to the account running things.
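
(The eksctl route looks roughly like this; the cluster name, account ID, and policy name are placeholders:)

$ eksctl create iamserviceaccount \
    --name s3-mounter \
    --namespace otomount \
    --cluster <cluster-name> \
    --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/kubernetes-s3-access \
    --approve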

Thank you kindly for the help @volldream !

@volldream (Author)

@vsoch Great work! I read your documentation and it is helpful. In my case, the problem was that the non-root user did not have access to read or write to the mounted directory, and I have raised a PR for customizing the pod command for goofys, which you can find here: #5. I am glad the solution works for you.
