
Wait(n=1) would exceed context deadline #359

Closed

gfrid opened this issue Jun 3, 2024 · 1 comment

Labels
bug Something isn't working

Comments

@gfrid

gfrid commented Jun 3, 2024

Describe the bug
My solution consists of 36 containers. When I perform a Helm upgrade of the solution, I get: rpc error: code = Unknown desc = us-west-2: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline

To Reproduce
Deploy many containers (at least 15) at the same time in AWS EKS.

Steps to reproduce the behavior:
Install AWS EKS 1.28 or 1.29
Install the latest CSI drivers
Run a mega deployment with many containers at once, where each container has at least 10 mounts from different Secrets Manager secrets

Do you also notice this bug when using a different secrets store provider (Vault/Azure/GCP...)? Yes/No
Yes

If yes, the issue is likely with the k8s Secrets Store CSI driver, not the AWS provider. Open an issue in that repo.

Expected behavior
Volumes should mount

Environment:
EKS 1.29

Additional context
Full error: Warning FailedMount 2m8s (x2 over 4m) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod production/helm-master-chart-services-576d879f7d-b7692, err: rpc error: code = Unknown desc = us-west-2: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
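
For reference, this error text comes from Go's golang.org/x/time/rate package, which the Kubernetes client uses for client-side throttling. Below is a minimal standalone sketch (not the provider's actual code; the limiter values and timeout are illustrative) that reproduces the same error when many requests contend for a low-QPS limiter under a deadline:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Stand-in for the Kubernetes client's rate limiter: 1 request/sec, burst of 1.
	limiter := rate.NewLimiter(rate.Limit(1), 1)

	// Each mount attempt runs under a deadline; with many pods mounting many
	// secrets at once, later requests would have to wait past that deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	for i := 0; i < 10; i++ {
		if err := limiter.Wait(ctx); err != nil {
			// Prints the same error seen in the kubelet event:
			// rate: Wait(n=1) would exceed context deadline
			fmt.Println(err)
			return
		}
	}
}
```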

@gfrid gfrid added the bug label Jun 3, 2024
@jbct

jbct commented Jun 3, 2024

Hi @gfrid. This is related to kubernetes client throttling that occurs in the Go libraries. See #136 for example. To address parts of this, we've added the ability to configure qps and burst limits in our provider, but you may be throttled by the upper-level driver as well. Since this is occurring with other vendors, you may want to create an enhancement with them. Closing as duplicate of 136.

@jbct jbct closed this as completed Jun 3, 2024