Implement a rate-limiting middleware #107

Open
mujx opened this Issue Oct 16, 2016 · 7 comments

mujx commented Oct 16, 2016

Or use an existing one.

The rate limiting tests currently in ruma are invalid and should be removed, because they don't account for the frequency of requests.

kajmagnus commented Jan 6, 2017

What does middleware mean here? Could it be, for example, an Nginx module, if you're using Nginx?

Do you use any HTTP server, e.g. Nginx, in front of Ruma? I didn't see any in the docker-compose file, does that mean there is none?

mujx commented Jan 6, 2017

By middleware we mean an Iron middleware. Iron is the HTTP framework we use in ruma.

We plan on using nginx as a reverse proxy in production, so this could also be done in nginx. In my opinion, doing it through nginx would be easier (no code) and more robust.
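For the in-app option, the core of such a middleware could be a token bucket keyed by access token or client IP. The sketch below is illustrative only (it is not Ruma code, and the names are made up); an actual Iron middleware would wrap something like this and consult it in its `before` hook:

```rust
use std::collections::HashMap;
use std::time::Instant;

/// A simple token-bucket rate limiter: each key may make up to
/// `capacity` requests, refilled at `refill_per_sec` tokens per second.
/// (Sketch only; a real Iron middleware would hold this behind a mutex.)
struct RateLimiter {
    capacity: f64,
    refill_per_sec: f64,
    buckets: HashMap<String, (f64, Instant)>, // key -> (tokens, last refill)
}

impl RateLimiter {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        RateLimiter { capacity, refill_per_sec, buckets: HashMap::new() }
    }

    /// Returns true if the request identified by `key` is allowed.
    fn allow(&mut self, key: &str) -> bool {
        let now = Instant::now();
        let entry = self
            .buckets
            .entry(key.to_string())
            .or_insert((self.capacity, now));
        // Refill tokens based on elapsed time, capped at capacity.
        let elapsed = now.duration_since(entry.1).as_secs_f64();
        entry.0 = (entry.0 + elapsed * self.refill_per_sec).min(self.capacity);
        entry.1 = now;
        if entry.0 >= 1.0 {
            entry.0 -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Allow a burst of 3 requests, refilling 1 token per second.
    let mut limiter = RateLimiter::new(3.0, 1.0);
    let results: Vec<bool> =
        (0..4).map(|_| limiter.allow("@alice:example.org")).collect();
    println!("{:?}", results); // first three allowed, fourth rejected
}
```

Keying by access token (rather than IP) matches the per-user flavor of limiting discussed later in this thread; an nginx-level limiter would more naturally key by IP.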

jimmycuadra commented Jan 6, 2017

I'd be interested to see how rate limiting might work at the Nginx layer. At first thought, it doesn't seem like the right choice, since rate limiting is part of the Matrix spec itself, only applies to certain endpoints, and is expected to give a Matrix-specific response when a request is rate limited. That would mean putting core homeserver behavior into Nginx configuration.
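Concretely, the spec's rate-limit error is an HTTP 429 with an `M_LIMIT_EXCEEDED` body. A minimal sketch (not Ruma's actual error types) of building that body:

```rust
/// Builds the JSON body the Matrix spec prescribes for a rate-limited
/// request (served with HTTP status 429). Sketch only; Ruma's real
/// error handling uses its own types rather than hand-built strings.
fn limit_exceeded_body(retry_after_ms: u64) -> String {
    format!(
        "{{\"errcode\":\"M_LIMIT_EXCEEDED\",\"error\":\"Too Many Requests\",\"retry_after_ms\":{}}}",
        retry_after_ms
    )
}

fn main() {
    println!("{}", limit_exceeded_body(2000));
}
```

Producing `retry_after_ms` requires knowing the limiter's internal state, which is part of why doing this purely in Nginx is awkward.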

mujx commented Jan 6, 2017

You are right, the Matrix-specific response complicates things. Also, some of the rate limiting should be based on access tokens, if I understand correctly. I'm not really familiar with the rate limiting options in nginx or how some of these problems might be overcome.

kajmagnus commented Jan 6, 2017

Thanks for explaining. In my web app, I have two layers of rate limiting:

  1. A layer in Nginx, which is a combination of Nginx directives plus a Lua module I've started writing (only just started, and I don't really know Lua yet). The Lua module limits bandwidth (oops, I now notice this issue's title says "rate limiting", not "bandwidth limiting", so I'm a bit off-topic here), so Google Cloud Engine won't send me a $99999 bill just because a botnet kept downloading a 1 MB image nonstop for a month (outgoing Google Cloud bandwidth is expensive).

  2. And a more detailed layer in the app server (written in Scala). This layer knows about app server internals; for example, it understands that the full-text-search endpoint needs to be rate limited much more aggressively than the view-page endpoint.

The reason I posted my question above is that I was wondering if it could possibly make sense for me to make the Nginx Lua module more generic, so it could be of use for Ruma too. Now, having read the title again and noticed it's about rate limiting, not bandwidth, I think I may have gone a bit off-topic, though.

jimmycuadra commented Jan 6, 2017

Here's the section on rate limiting from the current version of the spec: https://matrix.org/docs/spec/client_server/r0.2.0.html#rate-limiting. You'll notice that various API endpoints are marked as rate-limited as well.

jimmycuadra commented Jan 6, 2017

Probably also worth mentioning that the rate limiting in the spec is there to prevent the Matrix event graph from getting spammed, e.g. by a malicious bot trying to flood a room with messages. Rate limiting within the ruma application won't necessarily save the application itself from getting overloaded by such requests, and that's where having rate limiting at the Nginx layer could be useful. By the time we get to real production deployments, we might want to recommend a setup that includes some sort of rate limiting at the Nginx layer for that reason. Of course, that is getting into the area of DoS attacks, so it might be a problem beyond the scope of what we should attempt to ship with ruma.
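For that coarse, DoS-level layer, nginx's built-in `limit_req` module could sit in front without touching Ruma at all. A hedged sketch (the zone name, rates, and upstream address are placeholders, not a recommended production config):

```nginx
# Example of per-IP request limiting in nginx (all values illustrative).
http {
    # Track clients by IP; allow 10 requests/second on average,
    # using a 10 MB shared-memory zone for the counters.
    limit_req_zone $binary_remote_addr zone=ruma_limit:10m rate=10r/s;

    server {
        location /_matrix/ {
            # Permit short bursts of up to 20 extra requests; reject
            # the rest with 429 instead of the default 503.
            limit_req zone=ruma_limit burst=20 nodelay;
            limit_req_status 429;
            proxy_pass http://127.0.0.1:3000;
        }
    }
}
```

Note this only limits by IP and returns a plain 429 without the Matrix `M_LIMIT_EXCEEDED` body, so it complements rather than replaces the spec-compliant limiting inside the homeserver.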
