Failed to create access point: AccessPointLimitExceeded: You have reached the maximum number of access points #517
@chandrab EFS has a hard limit of 120 access points per file system, according to the resource quotas in the official documentation. This includes access points that were not created by the driver.
@kbasv is there a way the EFS CSI Driver could use a different implementation that does not rely on access points? The hard limit of 120 access points can be extremely limiting in Kubernetes, since it essentially means you are capped at 120 PVCs. Beyond that, you would need to create another EFS file system with another storage class to scale further. I think that defeats the scalability aspect of EFS and requires manual intervention, or a completely different provisioner, to take advantage of EFS. This is the limitation my project has hit with the EFS CSI Driver at the moment.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules.
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@artificial-aidan: You can't reopen an issue/PR unless you authored it or you are a collaborator.
I think this issue should be reopened. Even if we never get an implementation that avoids one access point per PV(C), at least the access points for released PV(C)s should be deleted, IMO.
@jonathanrainer is proposing a directory-based approach (not using access points) in PR #732; you may want to check it out!
@Ashley-wenyizha @wongma7 @jsafrane this issue is critical (and still present), can one of you please reopen it?
I second reopening this issue. @Ashley-wenyizha @wongma7
I'm in support of reopening this issue as well @Ashley-wenyizha @wongma7. We just ran headfirst into the 120-access-point limit and have little recourse other than migrating all our workloads to a different EFS/NFS driver. If this project does not want to support alternate provisioning methods, the documentation should at least make it very clear that a maximum of 120 PVs per storage class is supported, so that others who need a large number of PVs know to look elsewhere.
Why is this closed? This can be critical. |
Hey, see https://aws.amazon.com/about-aws/whats-new/2023/01/amazon-efs-1000-access-points-file-system/
They upped the limit to 1,000 last night. I asked my rep because I did not see it in my service quotas, and got this further info: "I've communicated with the EFS PM on this. All existing and newly created file systems now automatically support up to 1,000 EFS Access Points; no limit increase request is needed from your end. He has pointed out that what you're seeing is a visual lag and will get this updated as soon as possible."
Hope this helps everyone.
This is still a problem. 1,000 volumes isn't that many.
/kind bug
What happened?
aws-efs-csi-driver throws an error saying "Failed to create access point: AccessPointLimitExceeded: You have reached the maximum number of access points" after upgrading aws-efs-csi-driver to v1.3.2.
Access points are not being deleted when all PVCs in the namespace are deleted.
What you expected to happen?
Access points should be deleted when the corresponding PVC is deleted.
How to reproduce it (as minimally and precisely as possible)?
Install the driver using Helm:

```shell
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade --install aws-efs-csi-driver --namespace kube-system aws-efs-csi-driver/aws-efs-csi-driver
```
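For context on why the limit is hit, the driver creates one EFS access point per dynamically provisioned volume when the storage class uses access-point provisioning mode. A storage class roughly along these lines (the file system ID below is a placeholder, not from this issue) exhibits the one-access-point-per-PVC behavior discussed above:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap      # driver creates one EFS access point per provisioned volume
  fileSystemId: fs-12345678     # placeholder: replace with your EFS file system ID
  directoryPerms: "700"
```

With a class like this, every PVC bound to it consumes one access point on the file system, so the per-file-system access point quota effectively caps the number of PVCs per storage class.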
Anything else we need to know?:
Environment
- Cloud provider: AWS EKS
- Kubernetes version (kubectl version): v1.20.4
- Driver version: v1.3.2