"Rate exceed" triggering for frequent builds #28

Closed · nesta219 opened this issue Feb 27, 2020 · 4 comments

nesta219 commented Feb 27, 2020

Hi there,

My company uses a monorepo to build/deploy multiple projects at once. As such, we run 10+ checks using this action simultaneously, and if there are multiple PRs building at one time, we often see failures along the lines of:

```
Run aws-actions/aws-codebuild-run-build@v1.0.0
*****STARTING CODEBUILD*****
##[error]Rate exceeded
*****CODEBUILD COMPLETE*****
```

My suspicion is that this is actually the AWS API rate-limiting us on the backend, since the action makes repeated, simultaneous API calls from the same access key. This could probably be solved either by increasing the sleep time during waitForBuildEndTime or by adding a try/catch block so the process doesn't die when an API call returns a rate-limit error.
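
Roughly the shape I have in mind for the try/catch option (a hypothetical helper, not the action's actual code; assumes the AWS SDK for JavaScript v2 CodeBuild client):

```js
const AWS = require('aws-sdk');

// Hypothetical polling helper: swallow a throttling error instead of letting
// it kill the whole action, and just try again on the next tick.
async function pollBuild(buildId, intervalMs = 5000) {
  const codebuild = new AWS.CodeBuild();
  for (;;) {
    try {
      const { builds } = await codebuild
        .batchGetBuilds({ ids: [buildId] })
        .promise();
      if (builds[0] && builds[0].buildComplete) return builds[0];
    } catch (err) {
      // "Rate exceeded" surfaces as a throttling error; anything else is fatal.
      if (err.code !== 'ThrottlingException') throw err;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```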

seebees (Collaborator) commented Feb 28, 2020

Looking at this, my guess is that we are polling too fast. The tool pings every 5 seconds:
https://github.com/aws-actions/aws-codebuild-run-build/blob/master/code-build.js#L67

Keeping things simple, I think just pushing this to 60 seconds would be fine.
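
Something like this is all I mean (a rough sketch; the names and loop shape are illustrative, not the actual code in code-build.js):

```js
// Illustrative only: poll BatchGetBuilds once a minute instead of every
// 5 seconds, so a pile of concurrent builds doesn't trip the API rate limit.
const POLL_INTERVAL_MS = 60 * 1000;

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitForBuildEndTime(codebuild, buildId) {
  let build;
  do {
    await sleep(POLL_INTERVAL_MS);
    ({ builds: [build] } = await codebuild
      .batchGetBuilds({ ids: [buildId] })
      .promise());
  } while (!build || !build.endTime);
  return build;
}
```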

nesta219 (Author) commented

@seebees that sounds great. I think another solid idea would be to simply wrap the call in a try/catch and then retry, rather than having the entire action fail because one API call was rate-limited.

seebees (Collaborator) commented Mar 2, 2020

Just sticking in a try/catch is simple, but scary, because it has no backoff: as clients ramp up, there is never enough breathing room to recover.

I'll throw some comments on the PR, because the best solution combines both: a longer wait, plus slowing down on error.
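
To sketch the combination (the numbers and names here are placeholders, not the final change):

```js
// Sketch: modest base interval, exponential backoff whenever CodeBuild
// answers with a throttling error, reset once a poll succeeds.
async function pollWithBackoff(codebuild, buildId, baseMs = 30 * 1000, maxMs = 5 * 60 * 1000) {
  let waitMs = baseMs;
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, waitMs));
    try {
      const { builds: [build] } = await codebuild
        .batchGetBuilds({ ids: [buildId] })
        .promise();
      if (build && build.buildComplete) return build;
      waitMs = baseMs; // the call succeeded, so drop back to the base interval
    } catch (err) {
      if (err.code !== 'ThrottlingException') throw err;
      waitMs = Math.min(waitMs * 2, maxMs); // rate limited, so slow down
    }
  }
}
```

Resetting to the base interval after a successful call keeps the happy path reasonably fast while still giving the API breathing room whenever it pushes back.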

seebees closed this as completed in 8adff21 on Mar 5, 2020

seebees (Collaborator) commented Mar 5, 2020
