Advanced rate limiter #142
On the Slow Down page, we have this info:
From which I can’t build a dynamic rate limiter; I didn’t find where those values are set in the code, I only see that they are read from the socket data. The spirit is still max 1 request per second, but it seems acceptable to proceed with @kepstin’s max 10 requests within 11 seconds.
Being able to launch around 10 initial requests (I would queue an empty slot first, for the already loaded current page, then 9 actual requests) could greatly improve MERGE HELPOR 2 and PENDING EDITS (associated artists).
One thing to keep in mind is that you need to maintain a long-term average of 1 request/second even over multiple page loads or refreshes. E.g. what happens if someone opens a bunch of tabs where your script is running in all of them? Unless you can coordinate the rate limit over multiple tabs and multiple page loads, I recommend using a simple 1 req/second method.
Thanks @kepstin, indeed!
A solution to coordinate between multiple pages would be to store the queue in
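The cross-tab coordination idea above could be sketched roughly as follows. This is only an assumption about the approach, not the actual implementation: it supposes a store shared between tabs (something like `window.localStorage`; here the store and the clock are injected so the sketch is testable), and the key name `rateLimiterNextSlot` is hypothetical. Each tab reserves its slot by advancing a shared "next free slot" timestamp.

```javascript
// Hypothetical storage key shared by all tabs of the script.
const SLOT_KEY = 'rateLimiterNextSlot';

// Reserve the next dispatch slot in the shared store and return the delay
// (ms) this tab should wait before firing its request.
// `store` needs getItem/setItem (e.g. window.localStorage in a real page);
// `now` is injectable so the schedule can be checked without real timers.
function reserveSharedSlot(store, minIntervalMs = 1000, now = Date.now) {
  const t = now();
  const nextSlot = Number(store.getItem(SLOT_KEY)) || 0;
  const start = Math.max(t, nextSlot); // fire now, or at the next free slot
  store.setItem(SLOT_KEY, String(start + minIntervalMs));
  return start - t;
}
```

Note that `localStorage` offers no atomic read-modify-write, so two tabs reserving at the exact same instant could still race; a real implementation would need some extra care (e.g. re-checking the key after a random jitter).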
It may fix #115. |
I’m closing this now that #115 is fixed.
I must try to understand @kepstin’s advanced rate limiter.
He has left some useful explanations in the IRC logs.
My rate limiter queues the requests and launches them at a maximum rate of 1 per second.
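A minimal sketch of that fixed-rate behaviour, assuming each queued request simply reserves the next free one-second slot (the class name and injectable clock are illustrative, not taken from the actual script):

```javascript
// Fixed-rate limiter sketch: each reservation is spaced at least
// minIntervalMs after the previous one. `now` is injectable so the
// computed schedule can be verified without real timers.
class SimpleRateLimiter {
  constructor(minIntervalMs = 1000, now = Date.now) {
    this.minIntervalMs = minIntervalMs;
    this.now = now;
    this.nextSlot = 0; // earliest time the next request may start
  }

  // Reserve a slot and return the delay (ms) before the caller may fire.
  reserve() {
    const t = this.now();
    const start = Math.max(t, this.nextSlot);
    this.nextSlot = start + this.minIntervalMs;
    return start - t;
  }
}
```

A caller would then wrap each request in something like `setTimeout(doRequest, limiter.reserve())`.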
His rate limiter would launch 10 simultaneous requests, then make sure there are never more than 10 requests launched in any 10-second window for the remaining queued requests.
That would make it much faster than mine for batches of 10 or fewer requests.
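The scheme described above could be sketched as a sliding-window limiter: the first 10 reservations fire immediately (the initial burst), and each later one is scheduled one window length after the request 10 places before it. This is my reading of the IRC description, not kepstin’s actual code, and the parameter values are just the ones quoted above:

```javascript
// Sliding-window limiter sketch: at most maxRequests dispatches in any
// windowMs-long window, with an initial burst of maxRequests.
class SlidingWindowLimiter {
  constructor(maxRequests = 10, windowMs = 10000, now = Date.now) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.now = now;
    this.starts = []; // dispatch times already handed out, oldest first
  }

  // Reserve a slot and return the delay (ms) before the caller may fire.
  reserve() {
    const t = this.now();
    let start = t;
    if (this.starts.length >= this.maxRequests) {
      // The request maxRequests back must have left the window first.
      const anchor = this.starts[this.starts.length - this.maxRequests];
      start = Math.max(t, anchor + this.windowMs);
    }
    this.starts.push(start);
    return start - t;
  }
}
```

For a batch of 10 this gives zero delay for everything, where the fixed-rate version would spread the same batch over 9 seconds.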
I have to find out the actual MB guidelines about rate limiting: kepstin says it would be max 10 requests within 10 seconds, but Rate limiting says max 1 request per second (per IP), and I think I have read something like max 22(?) requests within 43(?) seconds in the 503 error slow down pages…