Can't mount s3 bucket. (Permission denied) #173
Comments
Same problem with the latest EKS, IPv6, and Bottlerocket AMIs. I thought it was related to the IPv6 stack, but maybe not. #158 (comment)
Same problem with the latest EKS (1.29) and Bottlerocket AMIs.
I tried this on a Bottlerocket EKS cluster running both 1.28 and 1.29 (upgraded from 1.27). Everything seems to be working as expected on my clusters, so here are a few things to check:
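(The original checklist did not survive in this copy of the thread. Based on the knowledge-base article linked below, the usual IRSA checks look roughly like this; the cluster name and role name are placeholders:)

    # Reconstructed check commands (a sketch, not the commenter's originals);
    # <cluster> and <s3-csi-driver-role> are placeholders.

    # 1. The cluster's OIDC issuer should have a matching IAM OIDC provider.
    aws eks describe-cluster --name <cluster> --query "cluster.identity.oidc.issuer" --output text
    aws iam list-open-id-connect-providers

    # 2. The driver's service account should carry the eks.amazonaws.com/role-arn annotation.
    kubectl -n kube-system get sa s3-csi-driver-sa -o yaml

    # 3. The role's trust policy should reference that issuer and the exact sub.
    aws iam get-role --role-name <s3-csi-driver-role> --query "Role.AssumeRolePolicyDocument"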
I took these steps from this knowledge base article, which has some more details: https://repost.aws/knowledge-center/eks-troubleshoot-oidc-and-irsa. The documentation for IRSA might also be helpful: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html. The original report says it was working on 1.27, so there might be something else going on here, but these are the first things to check. If that still doesn't fix the issue, try to verify that the IRSA credentials are getting into the driver container with a command like this:
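(The commenter's exact command was not captured. A sketch that checks the same thing, assuming the add-on's default names — an s3-csi-node pod with a container named s3-plugin:)

    # Pod and container names below are assumptions based on the add-on defaults.
    # AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE are injected by the EKS pod
    # identity webhook when IRSA is wired up correctly.
    kubectl -n kube-system exec <s3-csi-node-pod> -c s3-plugin -- \
      env | grep -E "AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE"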
Hello @jjkr, a note: we are using the Mountpoint for Amazon S3 CSI Driver add-on, and the service account s3-csi-driver-sa has been deployed in the kube-system namespace.
@jjkr Thank you for the answer! But I had already checked all of those things, and unfortunately they all look good.
Same issue.
Same issue.
So, everyone, I want to say sorry: my issue is solved. I had an incorrect OIDC provider URL. I will close this bug. On version 1.28 it works as expected.
@jjkr In my case it was the SA's annotation, which was completely missing. I don't know why it was not applied; maybe it has something to do with the fact that SA annotations are used in the template but not declared in values.yaml... P.S. A note to other poor souls spending hours debugging this thing: after changing the SA or roles, the S3 driver pods in kube-system need a restart for the changes to take effect (see the sketch below).
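(A minimal sketch of that restart, assuming the add-on's default DaemonSet name s3-csi-node:)

    # Restart the driver pods so they pick up the new SA annotation/role.
    # The DaemonSet name is an assumption based on the add-on defaults.
    kubectl -n kube-system rollout restart daemonset s3-csi-node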
I have the same issue, but when I replace the trust relationship field of the IAM role with the exact service account name system:serviceaccount:kube-system:s3-csi-driver-sa instead of system:serviceaccount:kube-system:s3-csi-*, it works.
Thank you!!! It works after changing to s3-csi-driver-sa.
Hey @ZhouLihua @hongmeizwex, sorry for the inconvenience. It seems our EKS add-on documentation suggests using a wildcard match in the trust policy. We'll update it to recommend an exact match (i.e., system:serviceaccount:kube-system:s3-csi-driver-sa) instead of the wildcard match it currently recommends.
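(For reference: IAM only interprets the * and ? wildcards under the StringLike condition operator; under StringEquals a * is matched literally and never succeeds, which is one way a wildcard sub pattern can silently fail. A wildcard condition would need to be shaped roughly like the fragment below; the issuer URL is a placeholder:)

    "Condition": {
      "StringLike": {
        "oidc.eks.<region>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:kube-system:s3-csi-*"
      }
    }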
I used StringEquals for the match. Here is the whole trust entity:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::xxxxxxxxx:oidc-provider/oidc.eks.ap-southeast-1.amazonaws.com/id/62709B3E9E9210140779D854D9B1FB6E"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "oidc.eks.ap-southeast-1.amazonaws.com/id/62709B3E9E9210140779D854D9B1FB6E:sub": "system:serviceaccount:kube-system:s3-csi-driver-sa",
              "oidc.eks.ap-southeast-1.amazonaws.com/id/62709B3E9E9210140779D854D9B1FB6E:aud": "sts.amazonaws.com"
            }
          }
        }
      ]
    }
Original issue description:

/kind bug

The problem does not exist in Kubernetes version 1.27.
Hello! I have a service account with a role that contains the following policy:
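(The policy itself did not survive in this copy of the issue. For illustration only, a typical Mountpoint-style S3 policy — the bucket name is a placeholder — looks like:)

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": ["arn:aws:s3:::<your-bucket>"]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:AbortMultipartUpload",
            "s3:DeleteObject"
          ],
          "Resource": ["arn:aws:s3:::<your-bucket>/*"]
        }
      ]
    }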
My PV:
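(The original manifest was not captured. A sketch of a typical static-provisioning PV for this driver, with placeholder bucket name and region:)

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: s3-pv
    spec:
      capacity:
        storage: 1200Gi   # ignored by the driver, but a value is required
      accessModes:
        - ReadWriteMany
      mountOptions:
        - allow-delete
        - region ap-southeast-1   # placeholder region
      csi:
        driver: s3.csi.aws.com
        volumeHandle: s3-csi-driver-volume
        volumeAttributes:
          bucketName: <your-bucket>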
My PVC:
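(Likewise not captured; a sketch of a matching PVC — storageClassName must be empty for static provisioning:)

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: s3-claim
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""   # required for static provisioning
      resources:
        requests:
          storage: 1200Gi    # ignored, but must be set
      volumeName: s3-pv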
But I encounter a "Permission denied" error in the logs when mounting the bucket.