Ratelimiter #1211
Conversation
// down.
if res.StatusCode == http.StatusTooManyRequests {
	l.SetLimit(detune(l.Limit()))
}
I may be reading this wrong, but when/if the downstream recovers, would the limit be "stuck" at a lower than optimal rate?
Yes.
I'm not sure what to do to re-ramp the limit. Using SetLimitAt would result in the limit getting similarly stuck, by my thinking. Perhaps a second Limiter with a token representing an increase to the limit?
I am not so familiar with the request patterns or how a certain node can become overwhelmed, but would it make sense to have a second case (or an else case) that increases the Limit (retune) when status == <good status>, and then take the Min(newLimit, lim) to ensure you don't go over the max? (In general, detune should reduce the limit faster than retune increases it.)
In the long run it seems like using the client to mitigate server errors would lead to a different load-balancing strategy in the RoundTrip, but that seems like a discussion for a different time.
Here's output from
NB: Failing because of dependency on unpublished
in comparison to... ?
on
You see a consecutive speedup? Interesting that it got faster.
What do you mean "consecutive?"
Do you see a speedup over more than just one iteration of the comparison? I'm not sure why there would be a speedup.
Repeat runs of this branch were around 1:40.
This change adds a rate limiter for HTTP requests. It's purposefully unconfigurable for ease of use.

Signed-off-by: Hank Donnay <hdonnay@redhat.com>

This also sets up a cookie jar correctly so that any rate-limiting schemes based on Cookie headers will actually work.

Signed-off-by: Hank Donnay <hdonnay@redhat.com>

Signed-off-by: Hank Donnay <hdonnay@redhat.com>
Adds a ratelimiter and more correct Client usage, and wires it up where it seems appropriate.