Throttling: Rate exceeded #5637
Comments
Hi @danielfariati, thanks for reporting this. We will update this issue when there is movement.
We hit this issue regularly and it is getting really annoying 🤨 In our last build, 2 of 10 stacks failed with the "Throttling: Rate exceeded" error.
This is becoming a bigger and bigger issue for my team as well; we are now forced to stagger deployments that could otherwise run in parallel. Having this fixed would be a big quality-of-life improvement.
@Silverwolf90 that does not sound ideal at all, and we should be providing a better experience natively. Bumping this up to a p1.
Just to add another voice to this: it is affecting my team as well. In particular, we have several CDK apps, each of which creates over 100 stacks.
Found this error because we are also experiencing this. BTW, there is no mention of it in CloudFormation or CloudWatch Logs. This looks like an API that is not integrated with the rest of AWS.
Picking this task up.
Hey @shivlaks, one quick question: is your task going to be to expose the retry delay parameter, or to allow async deploys (i.e. call
I'm still exploring the options, but some of the things we are considering include:
The downside of bailing on the stack monitoring is that subsequent deploys will not be initiated by the CDK: if stack B depends on stack A, we can't start deploying B until A has completed, and that would not be possible if we stopped monitoring. This would affect wildcard deployments and any scenario where we can't reason about the status of a stack without polling. Handling rate limiting more gracefully is a precursor to attempting parallel deployments.
We know the directed acyclic graph (DAG) of stack dependencies, so we could support bailing on terminal nodes in that graph, because we don't care about their status (in the context of blocking future actions), though we wouldn't be able to display stack outputs for bailed deployments.
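To illustrate the idea (a hypothetical sketch, not anything the CDK actually ships): given the dependency graph, the "terminal" stacks are exactly those that no other stack depends on, so their monitoring could be skipped without blocking anything downstream.

```ts
// Hypothetical sketch: identify terminal stacks in the dependency DAG.
// A terminal stack is one no other stack depends on, so nothing
// downstream is blocked on its completion and monitoring could be skipped.
interface StackNode {
  name: string;
  dependencies: string[]; // names of stacks this stack depends on
}

function terminalStacks(stacks: StackNode[]): string[] {
  const dependedUpon = new Set<string>();
  for (const stack of stacks) {
    for (const dep of stack.dependencies) {
      dependedUpon.add(dep);
    }
  }
  return stacks.map((s) => s.name).filter((name) => !dependedUpon.has(name));
}

// B depends on A, so A must still be monitored; B and C are terminal.
console.log(terminalStacks([
  { name: 'A', dependencies: [] },
  { name: 'B', dependencies: ['A'] },
  { name: 'C', dependencies: [] },
])); // -> [ 'B', 'C' ]
```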
@richardhboyd good point; it's another option to add to the list of things to consider. I wonder if it would be a useful feature to allow retrieving stack outputs as a command, i.e. poll all the specified stacks and write their outputs to a specified location.
What about avoiding polling altogether while scaling to a large number of stacks in parallel: have a CDK service endpoint which CDK clients would subscribe to. Once a stack finishes deploying, the client would get an (event-driven) notification and continue to the next stack.
We're still seeing this; any news? Here is the common stack trace from cdk 1.44 in case it helps. Since it is a retryable error, why doesn't the API simply... retry?
I don't know if this is related, but I've started seeing similar throttling errors when trying to create an IAM role within a single stack:
We experienced the same:
I have been seeing something similar today. I think there may be an AWS issue, as I am not creating many roles and haven't seen this problem on the same stack + account previously.
There was an IAM issue overnight, but it appears to be resolved or in the process of resolving now.
The CDK (particularly `cdk deploy`) might crash after getting throttled by CloudFormation, once the default configured 6 retries have been exhausted. This changes the retry configuration of the CloudFormation client (and only that one) to use a custom backoff function that allows up to 100 retries before failing (unless it reaches an error that is either not declared as retryable or is not a throttling error), and that backs off exponentially with a maximum wait time of 1 minute between two attempts. This should allow heavily parallel deployments in the same account and region to avoid getting killed by a throttle, but will reduce the responsiveness of the progress UI. Fixes #5637
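In aws-sdk (v2 for JavaScript) terms, the retry policy described above would look roughly like the following sketch (illustrative, not the literal code from the fix):

```ts
import * as AWS from 'aws-sdk';

// Sketch of the described policy: up to 100 retries, exponential backoff
// with jitter, capped at one minute between two attempts.
const MAX_RETRIES = 100;
const MAX_DELAY_MS = 60_000;

const cfn = new AWS.CloudFormation({
  maxRetries: MAX_RETRIES,
  retryDelayOptions: {
    // customBackoff returns the delay (in ms) before the next attempt.
    customBackoff: (retryCount: number) => {
      const exponential = 100 * Math.pow(2, retryCount);
      return Math.min(MAX_DELAY_MS, Math.random() * exponential);
    },
  },
});
```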
I'm getting this error in CDK v2. Given that the error is retryable, maybe retry it before blowing up?
I guess one way to solve this for good would be to create a utility API Gateway WebSocket (the kind API Gateway supports natively) as part of the bootstrap stack, and subscribe to events on it through the CDK CLI. That would let the CDK drop the polling approach, give the CLI immediate feedback when a stack is deployed or fails to deploy, and, as a bonus, eliminate throttling (since there would no longer be any direct AWS API calls involved). Side note: the third-party library cdk-watch already does something similar under the hood, i.e. a CLI + WebSocket API integration for realtime updates.
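A rough sketch of what the client side of that suggestion could look like; the endpoint URL, subscription message, and status values below are entirely made up for illustration:

```ts
import WebSocket from 'ws';

// Hypothetical WebSocket endpoint provisioned by the bootstrap stack.
const STACK_EVENTS_URL = 'wss://example.execute-api.us-east-1.amazonaws.com/prod';

// Resolve once the subscribed stack reaches a terminal state, with no polling.
function waitForStack(stackName: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const ws = new WebSocket(STACK_EVENTS_URL);
    ws.on('open', () => {
      // Made-up subscription protocol.
      ws.send(JSON.stringify({ action: 'subscribe', stackName }));
    });
    ws.on('message', (data) => {
      const event = JSON.parse(data.toString()) as { status: string };
      if (event.status === 'DEPLOY_SUCCEEDED') {
        ws.close();
        resolve(event.status);
      } else if (event.status === 'DEPLOY_FAILED') {
        ws.close();
        reject(new Error(`${stackName} ended in ${event.status}`));
      }
    });
    ws.on('error', reject);
  });
}
```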
For a p1 issue this has been open an awfully long time, and it's also something we're now starting to experience in CDK v2.
I originally thought my rate limiting issue came from this, but it actually turned out to be rate limits between CloudFormation itself and the services it was interacting with. For example, I had a ton of independent Lambda functions being created at the same time, which caused rate limit errors between CloudFormation and Lambda. After adding some explicit dependencies between the lambdas, fewer of them were created in parallel, which eliminated the rate limiting issues. There were some other resources I had to do the same with, like API Gateway models and methods. I'm mentioning this in case someone else here might have come to the wrong conclusion like I did.
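For reference, chaining such explicit dependencies in CDK looks roughly like this (a sketch assuming aws-cdk-lib v2; the construct IDs, count, and function settings are illustrative):

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

// Chain dependencies between otherwise-independent functions so
// CloudFormation creates them one after another instead of hitting
// the Lambda API with all of them in parallel.
class ManyFunctionsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    let previous: lambda.Function | undefined;
    for (let i = 0; i < 20; i++) {
      const fn = new lambda.Function(this, `Fn${i}`, {
        runtime: lambda.Runtime.NODEJS_18_X,
        handler: 'index.handler',
        code: lambda.Code.fromAsset('lambda'),
      });
      if (previous) {
        fn.node.addDependency(previous); // serialize creation
      }
      previous = fn;
    }
  }
}
```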
Ran into something very similar to @calebpalmer, where I was creating ~20 lambdas with a custom log retention. In my case, CDK created a custom resource (backed by a Lambda function) to apply the log retention. Coincidentally, there's a non-adjustable Lambda quota limit that these calls can run into.
I just ran into this when running multiple stack creations in parallel (RDS, ECS, EKS). The stack creations take long enough as it is; is there a way to increase the retries to avoid the stack failures?
The issue is persisting. Is there a way to get rid of it?
Just tried again, updating 35 stacks in parallel, and the issue still persists. Our end goal is to be able to deploy all stacks at once (~80, and growing... it was 27 when I first opened this ticket). The error looks like this:
Any news on this? The issue is tagged as p1, but it seems that nobody is looking into it. What we're currently doing is limiting deployments to 15 stacks in parallel, but this is becoming a huge problem as our number of stacks grows...
Currently we are running 11 deployments in parallel, and we faced the "Rate exceeded" issue for the first time. Day by day the parallel deployment count has been increasing. Is there any alternate solution right now? We don't want our pipeline to fail because of 1 or 2 deployments. Thanks.
I also had this issue on a stack that created about 160 CloudWatch canaries. I "resolved" it by using nested stacks, so the resource count per individual stack remained under 500, and within each stack I also used depends_on statements to keep the requests below the rate limit.
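Sketched in CDK terms (aws-cdk-lib v2 assumed; using plain CloudWatch alarms rather than canaries for brevity), the nested-stack split looks roughly like:

```ts
import { NestedStack } from 'aws-cdk-lib';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import { Construct } from 'constructs';

// Each nested stack holds one batch of alarms, keeping every stack
// comfortably below CloudFormation's 500-resource limit.
class AlarmBatchStack extends NestedStack {
  constructor(scope: Construct, id: string, metrics: cloudwatch.IMetric[]) {
    super(scope, id);
    metrics.forEach((metric, i) => {
      new cloudwatch.Alarm(this, `Alarm${i}`, {
        metric,
        threshold: 1,
        evaluationPeriods: 1,
      });
    });
  }
}
```

The parent stack can then slice its full metric list into batches and create one AlarmBatchStack per batch.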
Can we please get some attention on this issue? My team is suffering from this.
Please bump in priority. This issue is blocking us as well.
Also seeing this issue. Like @clifflaschet, my problem seems related to introducing a log retention policy to an existing stack with lambdas.
Super annoying; facing this too with a large GraphQL API.
I cannot believe that the following issue has not been referenced from this one: #8257. The solution to this problem has been implemented. I had to use higher numbers for maxRetries and base (I used 20 and 1000ms respectively) than the user in that issue, but I managed to get my project to deploy.
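For anyone following along, the workaround from #8257 looks roughly like this in code (a sketch assuming aws-cdk-lib v2, with the values from the comment above; the stack, runtime, handler, and code path are placeholders):

```ts
import { Duration, Stack } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as logs from 'aws-cdk-lib/aws-logs';

declare const stack: Stack; // some existing stack

new lambda.Function(stack, 'Fn', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('lambda'),
  // Creates the LogRetention custom resource whose API calls get throttled.
  logRetention: logs.RetentionDays.ONE_WEEK,
  // Give the custom resource a bigger retry budget (20 retries, 1000ms base).
  logRetentionRetryOptions: {
    maxRetries: 20,
    base: Duration.millis(1000),
  },
});
```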
This has been plaguing our deploys for ages! Thank you for following up about #8257!
@shaneargo #8257 / #8258 is a great point solution specifically for rate exceeded errors caused by the creation of a bunch of log retention custom resources. However, other messages in this issue seem to indicate it's not necessarily related 1:1 to log retention. As far as I understand, it could be any AWS service API <-> CDK interaction that returns a rate exceeded error.
Hi there, I also ran into this issue while trying to create a monitoring stack involving 500+ CloudWatch alarms. I split my stack using nested stacks to get around the 500-resource limit, but then started to face this error with no clear workaround. Is there any mitigation for this today?
I'm seeing this issue while deploying only ~20 stacks concurrently.
We experience this issue frequently while deploying only a single stack. Huge frustration. We also have log retention policies for each Lambda function.
We are experiencing this when deploying a single stack of EKS with 24 node groups.
As a workaround for this kind of problem, I have found the solution of adding a lot of depends_on relationships between the nodes. This leads to the functions being deployed and updated one after another. Slow but steady 😆
Also running into this issue. I applied a similar solution to domfie's, but it would be nice if this could just be resolved by the CDK directly.
This is my understanding, so please don't 🔥 me 😉 This issue has to do with the CDK CLI being throttled because it hits the CloudFormation API too often, and there is no way to override the defaults. It happens more often if you deploy multiple stacks in parallel. The other rate limiting problem that folks are seeing is related to custom resources in the stack itself, primarily log retention. CloudWatch Logs has really low API limits (like 5 reqs/sec). You can fix this by using the logRetentionRetryOptions prop.
When deploying multiple CDK stacks simultaneously, a throttling error occurs when trying to check the status of the stacks.
The CloudFormation deployment runs just fine, but CDK returns an error because the rate limit was exceeded.
We're using TypeScript.
Issue #1647 says that this error was resolved, but looking at the fix (#2053), it only increased the default number of retries, making the error less likely to happen.
Is there at least a way to override the base retryOptions in a CDK project? If there is, I can just override it on my side so the error does not occur. Even if there is, I think this should be solved in the base project.
I don't think CDK should ever fail because of rate limiting while trying to check the stack status in CloudFormation, as it does not affect the end result (the deployment of the stack).
Use Case
One of our applications has one CDK stack per customer (27 in total). When there's an important fix that needs to be sent to every customer, we run the cdk deploy command for each stack simultaneously via a Jenkins pipeline.
Error Log