I'm using the ratelimit decorator with a rate that should allow 5 requests per second. The log below shows the time elapsed in seconds. As I go from the 38s mark to the 40s mark, requests start getting blocked. From the log, my request rate doesn't appear to exceed 2 or 3 per second. Why are requests getting blocked if the rate limit should allow 5 per second?
Secondly, once I start getting a 403, I have to wait for 1 second (or whatever period was set in the rate) before requests resume. Is it possible to ignore requests once a 403 is raised, until the next successful request goes through (something like what issue #11 discusses)?
P.S. I'm using Django's default cache backend, i.e. 'django.core.cache.backends.locmem.LocMemCache'.
Without higher precision in the log, this is what I believe is happening:
Your first request isn't at 00:34:38.000, so requests 1-5 are within one second. Then, since you're not waiting a full second after the 5th request, you're running into the second issue you describe.
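To make the timing issue concrete (the timestamps here are hypothetical, chosen only to be consistent with a log that has 1-second resolution): five requests logged across the "38s" and "39s" marks can still all fall within a single 1-second span.

```python
# Hypothetical sub-second timestamps: the log only shows whole
# seconds, so these five requests look spread across "38s" and
# "39s" even though they span less than one second.
timestamps = [38.8, 38.9, 39.0, 39.2, 39.6]

logged_seconds = [int(t) for t in timestamps]   # what the log displays
span = timestamps[-1] - timestamps[0]           # actual elapsed time

print(logged_seconds)  # [38, 38, 39, 39, 39]
print(span < 1.0)      # True: all five within one second
```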
There are two ways to deal with this. One is what happens right now: if requests keep coming in they keep getting blocked, because the TTL on the cache key is continuously pushed back.
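A minimal sketch of that push-back behaviour (plain Python with a simulated clock, not the library's actual cache code): because every request, allowed or blocked, refreshes the key's TTL, a steady stream of requests never lets the counter expire.

```python
class PushBackLimiter:
    """Sketch of the current behaviour: every request (allowed or
    not) refreshes the cache key's TTL, so under constant load the
    counter never resets."""

    def __init__(self, limit, period):
        self.limit = limit      # e.g. 5 requests
        self.period = period    # e.g. 1 second
        self.count = 0
        self.expires_at = None  # simulated cache-key expiry

    def allow(self, now):
        # Key expired: start a fresh count.
        if self.expires_at is None or now >= self.expires_at:
            self.count = 0
        self.count += 1
        # The TTL is pushed back on *every* request.
        self.expires_at = now + self.period
        return self.count <= self.limit

limiter = PushBackLimiter(limit=5, period=1.0)
# A client making 10 req/s for 3 seconds:
results = [limiter.allow(t / 10) for t in range(30)]
# Only the first 5 requests ever succeed; the TTL never expires
# because each blocked request pushes it back another second.
```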
The other is to treat the period as a window. So if the rate is 5/s, and someone is constantly making 10 req/s, half of those will work and half will get rate limited. If it's 60/m and they're making 10 req/s, they'll hit the limit after 6 seconds, but then 54 seconds later they can make another 60 requests.
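A sketch of that window scheme (again plain Python with a simulated clock; a real implementation would live behind the cache backend): the counter resets at fixed period boundaries instead of the TTL being pushed back.

```python
class FixedWindowLimiter:
    """Sketch of the window scheme: the counter resets at fixed
    period boundaries rather than the TTL being pushed back."""

    def __init__(self, limit, period):
        self.limit = limit
        self.period = period
        self.window_start = None
        self.count = 0

    def allow(self, now):
        # Start a fresh window at each period boundary.
        if self.window_start is None or now - self.window_start >= self.period:
            self.window_start = now - (now % self.period)
            self.count = 0
        self.count += 1
        return self.count <= self.limit

# 60/m with a client constantly making 10 req/s:
limiter = FixedWindowLimiter(limit=60, period=60.0)
minute_one = [limiter.allow(t / 10) for t in range(600)]        # 0s-59.9s
minute_two = [limiter.allow(60 + t / 10) for t in range(600)]   # 60s-119.9s

# The first 60 requests (6 seconds' worth) succeed, the rest of the
# minute is blocked, then the next minute allows another 60.
print(sum(minute_one), sum(minute_two))  # 60 60
```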
I disliked that initially, but I've seen schemes like that used more in practice and have come around, or at least I'm more neutral now. The challenge is that you don't want all the windows to elapse at the same time, because then everyone who has been rate limited is freed at the same time, which creates a thundering herd problem. So the scheme has to include some mechanism for staggering the windows.
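One way to stagger the windows (a sketch of the idea, not something the library does today): shift each key's window boundaries by a stable offset derived from a hash of the key, so different keys' windows elapse at different times.

```python
import hashlib

def stagger_offset(key, period):
    # Stable per-key offset in [0, period), derived from a hash of
    # the key, so different keys get different window boundaries.
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return (digest % 10**6) / 10**6 * period

def window_start(key, now, period):
    # The start of the window containing `now`, with boundaries
    # shifted by the key's offset rather than aligned to multiples
    # of the period.
    offset = stagger_offset(key, period)
    return now - ((now - offset) % period)

# Two different keys get different window boundaries, so they are
# not all unblocked at the same instant.
a = window_start("user-a", 1000.0, 60.0)
b = window_start("user-b", 1000.0, 60.0)
```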