
Rate limiting #346

Closed
mcqueary opened this issue Sep 1, 2016 · 10 comments
mcqueary commented Sep 1, 2016

To support SLAs, billing, and fairness in shared deployments, the server should implement rate limiting. Rate limiting would control the rate of ingress so that messages/sec or bytes/sec can be capped at specific limits.

@mcqueary mcqueary added this to the gnatsd-1.0.0 milestone Sep 1, 2016

Nicolab commented Oct 6, 2016

Maybe use tc (Linux traffic control)?
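For reference, a minimal sketch of what `tc`-based shaping at the OS level might look like. The interface name (`eth0`), port (4222, the default NATS client port), and rate are placeholders; the commands require root and target inbound traffic via an ingress policer:

```sh
# Attach an ingress qdisc and police traffic destined for the NATS
# client port to ~1 Mbit/s, dropping packets over the limit.
# (eth0, 4222, and the rate are illustrative values.)
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
  match ip dport 4222 0xffff \
  police rate 1mbit burst 32k drop flowid :1
```

Note this limits bytes on the wire per interface, not messages per NATS client, so it is coarser than what the issue asks for.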

kozlovic added a commit that referenced this issue Dec 13, 2016
Global configuration to limit per-client ingress message rate.
Can be rate_msgs and/or rate_bytes.

Resolves #346
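The commit message above names `rate_msgs` and `rate_bytes` as global per-client ingress limits. Per the later discussion, this did not ultimately ship, so the following is only a hypothetical sketch of what such a configuration might have looked like (option names taken from the commit message; syntax and placement are assumptions):

```
# Hypothetical gnatsd configuration sketch -- not a shipped feature.
rate_msgs: 1000      # max ingress messages per second, per client
rate_bytes: 1048576  # max ingress bytes per second, per client
```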
@ghost ghost assigned kozlovic Dec 13, 2016
@ghost ghost added the waffle:needs review label Dec 13, 2016
@mcqueary mcqueary modified the milestones: gnatsd/0.10.0, gnatsd/1.0.0 Mar 21, 2017
@derekcollison (Member)

We may still choose to add rate limiting for ingestion into the server, but for now tc sounds like a good production choice. Closing for now.


bimlas commented Oct 7, 2022

How can I shape (rate-limit) the traffic of WebSocket clients? My backend communicates over the NATS protocol, and my clients connect to the NATS server over WebSocket connections. I'd like to limit the number of pubs/subs per client IP (all clients use the same NATS username), but I don't want to limit the backend services. Is that possible?

@derekcollison (Member)

Do you want to limit the clients' publishing rate, or the backend's publishing rate to the clients?


bimlas commented Oct 7, 2022

The first one, because I'd like to prevent DDoS attacks.

@derekcollison (Member)

We have never really had an issue with that per se. We do see folks, especially with JetStream, wanting flow control for JetStream consumers, which we provide.

We do not currently have a way for the server to enforce an inbound rate: you can enforce a max message size, but not the rate. Again, we have not seen that be an issue in practice. TCP/IP will naturally limit the flow on the inbound connection to the server itself, and you can use WebSockets with compression as well if needed. For the backend services, just make sure to run them in a distributed queue group; that way you can trivially scale them at will.


bimlas commented Oct 17, 2022

@derekcollison, thanks for the answer, but I still can't see how I can solve this issue:

[image attachment]

All of the WebSocket clients are using the default credentials, so I cannot separate them by username.

The main problem is that some of the backend services need some time to answer a request, so I would like to limit the requests per client over a given time range (e.g. 5 messages/second).

A possible option would be to handle the requests serially, but the backend services are NestJS applications that answer requests via Pub/Sub instead of Req/Res (https://docs.nestjs.com/microservices/nats#request-response), so serializing the requests is not possible. Even if I were able to serialize the messages instead of accepting them asynchronously, I cannot process the messages per client IP, so all users would have to wait for an answer whenever a burst hits the NATS server.

How can I solve this? I think I could use JetStream, but I don't see how it would help to rate-limit the clients per IP.

EDIT

If I were able to identify the user's IP, I could apply a custom NestJS Throttler module to the backend services (https://docs.nestjs.com/security/rate-limiting#proxies), but since there is no user identification, I cannot accomplish this.

@derekcollison (Member)

You could have all requests go to a JetStream stream. Each request subject could look like req.CLIENT_ID; then set MaxMsgsPerSubject to 5, MaxAge to 1s, and the discard policy to new per subject (DiscardNewPerSubject).

The clients could still send more than that, but the system would reject the excess and buffer requests for the backends.
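A sketch of the stream described above using the nats CLI; the stream name and subject prefix are illustrative, and flag names may vary slightly between natscli versions, so verify against `nats stream add --help`:

```sh
# Per-subject limit: at most 5 messages per req.<CLIENT_ID> subject,
# aged out after 1s, with new messages over the limit discarded.
nats stream add REQUESTS \
  --subjects "req.>" \
  --max-msgs-per-subject 5 \
  --max-age 1s \
  --discard new \
  --discard-per-subject \
  --storage memory \
  --defaults
```

Backend workers would then consume from the stream, and excess publishes per CLIENT_ID within the window are rejected by the server.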


bimlas commented Oct 18, 2022

Thank you very much! It works as expected, but I'm not sure how I can verify that CLIENT_ID is valid. A client can write any random string instead of its ID, which lets it send a lot of messages.

A possible solution, in my opinion, would be to send a randomly generated session ID to the client, stored on the server; when the client sends back a request, this ID is included, so we could check it before accepting the message in NATS (almost the same behaviour as checking a CSRF token). Does an interceptor already exist for this type of validation?

As I see it, it's possible to map subjects programmatically on the server side (https://docs.nats.io/nats-concepts/subject_mapping#deterministic-subject-token-partitioning). Is it possible to add the ID through it (for example in <server_name>_<client_number> form)?
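On the subject-mapping idea: deterministic token partitioning can only bucket a token the client already sent into a fixed number of partitions; it cannot inject a server-assigned identity that isn't in the subject. A server config sketch (subject names illustrative) of what partitioning does:

```
# Server config sketch: deterministically buckets the existing
# first token of req.<CLIENT_ID> into one of 5 partitions.
# It cannot add a client ID the publisher did not include.
mappings = {
  "req.*": "req.{{partition(5,1)}}.{{wildcard(1)}}"
}
```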

@derekcollison (Member)

We don't support auto-adding of the client ID at the moment; however, if the message is received via a service import, we attach a client info header to the request. That could be helpful here.
