
As a user I should be able to add a rate limit to an API so it is enforced globally #356

Closed
lonelycode opened this issue Jan 16, 2017 · 7 comments · Fixed by #1138

Comments

@lonelycode
Member

As a user I should be able to add a rate limit to an API so it is enforced globally, this should rate-limit access to the API across all tokens, and should supersede a token rate limit.

Functional Requirements:

  • I should be able to set a rate limit on an API
  • I should be able to add a global rate limit to an open API - for example, in B2B service-to-service calls this may be preferable to a token-based approach
  • I should be able to add this rate limit in the API Designer
@lonelycode lonelycode added this to the Release 2.4 milestone Jan 16, 2017
@buger buger self-assigned this Jan 25, 2017
@buger
Member

buger commented Jan 25, 2017

> and should supersede a token rate limit.

What is the reasoning behind overriding the token rate limit? Initially I was thinking that the global rate limit is a sort of "default", and you can override it per token if needed.

Or by "supersede" do you mean that if token_rate_limit > global_rate_limit, I should use the global one, i.e. it acts as a maximum value, and otherwise use the token-based one?

@lonelycode
Member Author

They are different counters. For example, if all users have a rate limit of 100/s and the global limit is 1000/s, then 10 users can go at full speed; however, if an additional user started sending the same 100/s of requests, the aggregate throughput would exceed 1000/s, and all users would be throttled.

This is so that if I am an API owner, and I know my infrastructure will explode at 1100/s but be OK at 1000/s, I can guarantee that no amount of users concurrently accessing my service can exceed that limit and damage the infra.

In a real case this would be much higher. I see this rate limit affecting everyone equally.

In terms of the code, it basically means that each API has its own DRL (distributed rate limiter) bucket that gets incremented before the per-token rate-limit bucket.
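The two-counter scheme described above can be sketched roughly as follows. This is a minimal illustration, not Tyk's actual DRL implementation: the `bucket` type and `allowRequest` function are hypothetical names, and a fixed-window counter stands in for the real distributed rate limiter.

```go
package main

import "fmt"

// bucket is a hypothetical fixed-window counter standing in for a
// DRL bucket; the real implementation is distributed and windowed.
type bucket struct {
	limit int // max requests allowed per window
	count int // requests seen in the current window
}

// allow increments the counter and reports whether the request fits
// under the bucket's limit.
func (b *bucket) allow() bool {
	if b.count >= b.limit {
		return false
	}
	b.count++
	return true
}

// allowRequest checks the per-API (global) bucket before the
// per-token bucket, so the API-wide cap is enforced across all
// tokens and supersedes any individual token limit.
func allowRequest(apiBucket, tokenBucket *bucket) bool {
	if !apiBucket.allow() {
		return false // global limit reached: everyone is throttled
	}
	return tokenBucket.allow()
}

func main() {
	// 11 tokens, each allowed 100/window, against a 1000/window
	// global cap: only 1000 of the 1100 attempts get through.
	api := &bucket{limit: 1000}
	allowed := 0
	for u := 0; u < 11; u++ {
		tok := &bucket{limit: 100}
		for i := 0; i < 100; i++ {
			if allowRequest(api, tok) {
				allowed++
			}
		}
	}
	fmt.Println(allowed) // 1000
}
```

Note that the global bucket is incremented even when the token bucket subsequently rejects the request, matching the "incremented before the user rate-limit bucket" ordering above.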

@buger
Member

buger commented Jan 25, 2017

Clear 👍

@lonelycode lonelycode self-assigned this Sep 28, 2017
@lonelycode lonelycode modified the milestones: Release 2.4.1, Release 2.4 Sep 28, 2017
@lonelycode
Member Author

I've moved this into 2.4 because it only requires a small UI change in the dashboard to make live.

@lonelycode
Member Author

(cc @lghiur / @ConsM)

@lonelycode
Member Author

Relevant PR because it didn't link properly: #1138
