Rate limiting issues #48
One more potential issue (I hope it's a bug and not a feature): say you have a rate of 1,000 per 60 seconds. By straddling the window boundary, 1,997 requests can be made within a period of 4 seconds and all will succeed. I would like rate limiting to prevent more than X requests over any given Y period. Instead of emptying the bucket completely when the time elapses, the bucket could be drained gradually every second, which would prevent a burst of double the rate in a short time period.
Hi,

General question: we only put the quota threshold in the response header because throttle thresholds change so quickly and would differ across parallelised responses. That said, a rate-limit header could be added; we've just not seen the need for it yet.
Ah, yes, I see your point regarding the bucket emptying. It's an interesting option; we'd need to investigate further, as our throttling mechanism lives entirely in Redis to speed things up, and I'm not sure Redis supports auto-decrementing keys. I guess it could be done with a built-in Lua script command. Open to suggestions (and PRs) :-)
Thanks for the example, you're right: the tests don't catch the off-by-one issue. To be honest, we really need to improve the testing throughout :-S The off-by-one bug is quite a simple fix; we just need to increment the counter when it goes into the key store, which makes the rate limiter behave more rationally. Will add this to the next release.
Just saw the commit on your fork; want to send a PR? Will merge it in :-)
#48: Fix off-by-one error with rate limiting
I submitted the pull request. I'll give the bucket-draining code some thought and will hopefully submit something soon.
Merged :-) thanks for that! Yeah, I've been thinking about bucket draining too. The current setup does [...] However, we could implement a more robust time-window based method that [...]

(Sent by email in reply to Thomas Peters's comment, Friday, March 13, 2015; parts of the message did not survive.)
The rate limiter has now been switched to use a rolling window. It still supports multiple processes (via a MULTI transaction) and can't be gamed by straddling the TTL, since it keeps a rolling record of request history in a sorted set: the request count always follows the rate-limit period window.
According to Access Control (v1.5), `allowance` and `rate` should be set to the same value. I'm seeing two issues:

1. `allowance` is never actually used. AFAICT, it's only ever decremented (session_manager.go).
2. `rate` has an off-by-one error. If you set `rate` to `5` and `per` to `5`, this sounds like it should mean "max of 5 requests every 5 seconds". However, this will only allow 4 requests to succeed and will fail on the fifth request. The rate-limiting test doesn't catch this because it never validates the second request. See gateway_test.go. If you do the following, you'll see it fails:

General question: if you are using rate limiting, should it be returned in the response headers? Currently I only see the quota information returned, via an `X-Ratelimit-Remaining` header, which isn't accurate, as your rate limiting usually has tighter thresholds than your quota. Restated: should there be separate response headers, one for quota and one for rate limiting?

Edit: Add test example.