
Rate Limit proposal #2039

Merged: 4 commits merged into master on Jun 24, 2021
Conversation

@lobkovilya (Contributor)

Proposal on the new Rate Limit policy (or maybe Rate Limiting?)

Signed-off-by: Ilya Lobkov <ilya.lobkov@konghq.com>
@lobkovilya requested a review from a team as a code owner May 24, 2021 15:19
@subnetmarco (Contributor)

`max` becomes `max_burst_allowed`, and when it is not specified it defaults to the configured number of connections (or requests).
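
A sketch of how that could look in the policy schema; `max_burst_allowed` is the field name from this comment, and the surrounding shape follows the examples later in this thread rather than a finalized API:

```yaml
conf:
  http:
    requests: 100           # steady-state allowance per 'period'
    period: 1s
    max_burst_allowed: 100  # hypothetical field; defaults to the 'requests' value when omitted
```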

Since this is local rate limiting, we have to set it on the destination side. It could be challenging to support the
`source` selector, but it is still quite possible for HTTP. We already set the `x-kuma-tags` header to identify the source for
FaultInjections, and I believe the same approach will work for RateLimit. For TCP we will most likely have a limitation
on the `source` selector, unless it is somehow possible to use SANs, but that only works with mTLS.
Contributor:

@subnetmarco

What is the main use case that users and customers are looking for?

Is it about throttling `web` so it won't send more than 100 req/s to `backend` if the app is misbehaving, or is it about protecting `backend` so that it does not receive more than X req/s?

If the former, then we can go with Ilya's proposal; but if it is about protecting `backend`, then it should be:

```yaml
type: RateLimit
name: rt-1
mesh: default
selectors:
- match:
    kuma.io/service: backend # apply on backend for any traffic
conf:
  tcp:
    connections: 100 # how many connections are allowed per 'period'
    period: 1s
  http: ...
```

To give more examples with the current approach:

```yaml
---
type: RateLimit
name: rt-1
mesh: default
sources:
- match:
    kuma.io/service: web
destinations:
- match:
    kuma.io/service: backend
conf:
  http:
    requests: 100 # how many requests are allowed per 'period'
    period: 1s
---
type: RateLimit
name: rt-1
mesh: default
sources:
- match:
    kuma.io/service: another-client
destinations:
- match:
    kuma.io/service: backend
conf:
  http:
    requests: 100 # how many requests are allowed per 'period'
    period: 1s
```
`web` and `another-client` are each able to execute 100 req/s.

```yaml
type: RateLimit
name: rt-1
mesh: default
sources:
- match:
    kuma.io/service: '*'
destinations:
- match:
    kuma.io/service: backend
conf:
  http:
    requests: 100 # how many requests are allowed per 'period'
    period: 1s
```

Any instance of any service can execute 100 req/s.

Contributor (Author):

Why can't the approach with `sources` and `destinations` have a policy like this:

```yaml
type: RateLimit
name: rt-1
mesh: default
sources:
- match:
    kuma.io/service: '*' # quoted; a bare * is not valid YAML
destinations:
- match:
    kuma.io/service: backend
conf:
  http:
    requests: 100 # how many requests are allowed per 'period'
    period: 1s
```

Contributor:

It can, but again, this is different from:

```yaml
type: RateLimit
name: rt-1
mesh: default
selectors:
- match:
    kuma.io/service: backend # apply on backend for any traffic
conf:
  tcp:
    connections: 100 # how many connections are allowed per 'period'
    period: 1s
  http: ...
```

applied on the backend.

In your case, any instance can send up to 100 req/s to `backend`.
In the case above, one instance of `backend` can process 100 req/s.

Contributor (Author):

But I proposed to apply rate limiting on the destination side; selecting by `sources` might be tricky, but I think it's possible to implement, at least for HTTP.

Contributor (Author):

@jakubdyszkiewicz @subnetmarco I reflected this discussion as 3 different options in the proposal

Contributor:

Rate limiting is for protecting the destination, although the destination service may want to assign different limits to different sources. Use cases are (see the sketch after this list):

- `backend` will only handle 100 req/s from any source.
- `backend` will only handle 100 req/s from any source, but a specific `backend-app` source should be able to make 200 req/s.
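
A sketch of the second use case, reusing the `sources`/`destinations` schema from the examples above. The policy names are hypothetical, and the assumption that the more specific `backend-app` match takes precedence over the `'*'` match is mine, not something confirmed in this thread:

```yaml
---
type: RateLimit
name: rt-any-source # hypothetical name
mesh: default
sources:
- match:
    kuma.io/service: '*'
destinations:
- match:
    kuma.io/service: backend
conf:
  http:
    requests: 100 # default budget for any source
    period: 1s
---
type: RateLimit
name: rt-backend-app # hypothetical name
mesh: default
sources:
- match:
    kuma.io/service: backend-app
destinations:
- match:
    kuma.io/service: backend
conf:
  http:
    requests: 200 # the privileged source may make 200 req/s
    period: 1s
```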

Signed-off-by: Ilya Lobkov <ilya.lobkov@konghq.com>

Review comment on the proposal text:

> ```
> append: false
> ```
>
> Same as [Design 1](#design-1), but rate-limiter is configured on the `destination` side. This allows user to protect
Contributor:

So this is a local rate limit on each backend instance?

Contributor (Author):

Yes, that's correct

@subnetmarco (Contributor)

This also needs to support rate limiting on external services; in that case the policy is applied at the source and not at the destination.
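
A sketch of what such a policy could look like, reusing the schema from this thread. Treating `httpbin` as an ExternalService, and the idea that matching an external service in `destinations` moves enforcement to the source sidecar, are illustrative assumptions:

```yaml
type: RateLimit
name: rt-external # hypothetical name
mesh: default
sources:
- match:
    kuma.io/service: web
destinations:
- match:
    kuma.io/service: httpbin # assumed ExternalService, so the limiter must run on the source side
conf:
  http:
    requests: 100
    period: 1s
```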

@subnetmarco (Contributor) commented Jun 1, 2021

Also, we need to support multiple backends for rate limiting. In this case it is local, but we may want to support a Redis-backed store in the future, like:

```yaml
type: RateLimit
name: rt-1
mesh: default
sources:
- match:
    kuma.io/service: edge-gateway
destinations:
- match:
    kuma.io/service: httpbin
conf:
  backends:
    local: {}
    redis:
      address: 1.1.1.1
      port: 8888
  http:
    requests: 100
    period: 1s
    status: 401
```

@nickolaev mentioned this pull request Jun 2, 2021
@nickolaev (Contributor)

> This also needs to support rate limiting on external services, therefore in this case applied at the source and not at the destination.

The implementation that I have been working on in #2083 is done on the destination.

@jpeach (Contributor) commented Jun 6, 2021

> This also needs to support rate limiting on external services, therefore in this case applied at the source and not at the destination.

> The implementation that I have been working on in #2083 is done on the destination.

IMHO the different places where rate limits can be applied are useful for different operational reasons. Since it's likely that we will end up with different kinds of rate limits (e.g. global, egress, ingress), we just need to be clear about it in the API design and documentation.

@jpeach (Contributor) commented Jun 24, 2021

Is this done since #2083 is merged?

@jakubdyszkiewicz merged commit fc09e53 into master Jun 24, 2021
@jakubdyszkiewicz deleted the docs/proposal-rate-limit branch June 24, 2021 08:26