
Ratelimiter #1211

Merged: 3 commits, May 13, 2021

Conversation

@hdonnay (Member) commented Mar 22, 2021

Adds a rate limiter and more correct Client usage, and wires it up where it seems appropriate.

// If the server reports too many requests, tune the limit down.
if res.StatusCode == http.StatusTooManyRequests {
l.SetLimit(detune(l.Limit()))
}
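
For context, a minimal sketch of how a limiter-aware transport around this hunk might look, assuming golang.org/x/time/rate; the transport type, the package name, and the halving detune are illustrative, not the PR's exact code:

package ratelimit

import (
	"net/http"

	"golang.org/x/time/rate"
)

// transport wraps an inner RoundTripper with a rate.Limiter.
type transport struct {
	inner http.RoundTripper
	l     *rate.Limiter
}

// detune halves the current limit so backoff is quick.
func detune(l rate.Limit) rate.Limit { return l / 2 }

func (t *transport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Block until the limiter grants a token, honoring cancellation.
	if err := t.l.Wait(req.Context()); err != nil {
		return nil, err
	}
	res, err := t.inner.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	// If the server reports too many requests, tune the limit down.
	if res.StatusCode == http.StatusTooManyRequests {
		t.l.SetLimit(detune(t.l.Limit()))
	}
	return res, nil
}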
Collaborator:

I may be reading this wrong, but when/if the downstream recovers, would the limit be "stuck" at a lower than optimal rate?

@hdonnay (Member, Author):

Yes.

I'm not sure what to do to re-ramp the limit. Using SetLimitAt would result in the limit getting similarly stuck, by my thinking. Perhaps a second Limiter, with tokens representing increases to the limit?
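
One reading of that idea, sketched under assumptions: a second, much slower rate.Limiter whose tokens gate how often the main limit may be raised. The cadence, maxLimit parameter, and raiseLimit name are all hypothetical:

package ratelimit

import (
	"time"

	"golang.org/x/time/rate"
)

// Assumed recovery cadence: at most one limit increase per 30 seconds.
var recovery = rate.NewLimiter(rate.Every(30*time.Second), 1)

// raiseLimit re-ramps the main limiter l toward maxLimit, spending a
// token from the recovery limiter each time it raises the limit.
func raiseLimit(l *rate.Limiter, maxLimit rate.Limit) {
	if l.Limit() < maxLimit && recovery.Allow() {
		l.SetLimit(l.Limit() + 1)
	}
}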

Collaborator:

I am not so familiar with the request patterns or how a certain node can become overwhelmed, but would it make sense to have a second case (or an else case) that increases the Limit (retune) when status == <good status>, and then take Min(newLimit, lim) to ensure you don't go over the max? (In general, detune should reduce the limit faster than retune increases it.) See the sketch after this comment.

In the long run, it seems like using the client to mitigate server errors would lead to a different load-balancing strategy in the RoundTrip, but that seems like a discussion for a different time.
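
A minimal sketch of that suggestion, assuming golang.org/x/time/rate; maxLimit and the tuning constants are illustrative. Detune cuts multiplicatively while retune recovers additively, so backoff outpaces recovery:

package ratelimit

import (
	"net/http"

	"golang.org/x/time/rate"
)

const maxLimit rate.Limit = 10 // assumed ceiling, requests per second

// detune cuts the limit multiplicatively so backoff is fast.
func detune(l rate.Limit) rate.Limit { return l / 2 }

// retune recovers additively and is capped at maxLimit.
func retune(l rate.Limit) rate.Limit {
	if n := l + 1; n < maxLimit {
		return n
	}
	return maxLimit
}

// adjust tunes the limiter after each response.
func adjust(l *rate.Limiter, status int) {
	switch {
	case status == http.StatusTooManyRequests:
		l.SetLimit(detune(l.Limit()))
	case status < 400:
		l.SetLimit(retune(l.Limit()))
	}
}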

@hdonnay (Member, Author) commented Apr 26, 2021

Here's output from time, dumping to /dev/null:

go run ./cmd/clairctl -c config.yaml.sample export-updaters > /dev/null  175.86s user 6.42s system 204% cpu 1:29.10 total

@hdonnay (Member, Author) commented Apr 26, 2021

NB: Failing because of a dependency on unpublished claircore changes. Will rebase with a new version when those changes are merged.

@ldelossa (Contributor):

> Here's output from time, dumping to /dev/null:
>
> go run ./cmd/clairctl -c config.yaml.sample export-updaters > /dev/null  175.86s user 6.42s system 204% cpu 1:29.10 total

In comparison to...?

@hdonnay (Member, Author) commented Apr 26, 2021

on main:

go run ./cmd/clairctl -c config.yaml.sample export-updaters > /dev/null  335.48s user 9.87s system 249% cpu 2:18.41 total

@ldelossa (Contributor):

> on main:
>
> go run ./cmd/clairctl -c config.yaml.sample export-updaters > /dev/null  335.48s user 9.87s system 249% cpu 2:18.41 total

You see a consecutive speed up? Interesting that it got faster.

@hdonnay (Member, Author) commented Apr 26, 2021

What do you mean "consecutive?"

@ldelossa (Contributor) commented Apr 26, 2021

> What do you mean "consecutive?"

Do you see a speedup over more than just one iteration of the comparison? I'm not sure why there would be a speedup.

@hdonnay (Member, Author) commented Apr 26, 2021

Repeat runs of this branch were around 1:40.
Repeat runs of main were around 2:10.

Commits

This change adds a rate limiter for HTTP requests. It's purposefully
unconfigurable for ease of use.

Signed-off-by: Hank Donnay <hdonnay@redhat.com>

This also sets up a cookie jar correctly so that any rate limiting
schemes based on Cookie headers will actually work.

Signed-off-by: Hank Donnay <hdonnay@redhat.com>

Signed-off-by: Hank Donnay <hdonnay@redhat.com>
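
A minimal sketch of the cookie jar setup the second commit describes, using the standard library's net/http/cookiejar; the constructor name and package are illustrative, not the PR's exact code:

package ratelimit

import (
	"net/http"
	"net/http/cookiejar"
)

// newClient returns a Client that retains cookies across requests.
// Without a jar, Set-Cookie headers are silently dropped, so any rate
// limiting scheme keyed on cookies would see every request as new.
func newClient() (*http.Client, error) {
	jar, err := cookiejar.New(nil) // nil Options uses default settings
	if err != nil {
		return nil, err
	}
	return &http.Client{Jar: jar}, nil
}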