
Request throttling #110

Closed
Ziinc opened this issue May 19, 2020 · 7 comments


Ziinc commented May 19, 2020

This feature would involve rate limiting of requests made by a spider, such as X requests per min.
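For illustration, per-minute rate limiting of this kind is commonly implemented with a token bucket. A minimal Python sketch (not Crawly code; the class and names are hypothetical):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per `per` seconds (default: per minute)."""

    def __init__(self, rate, per=60.0):
        self.rate = rate
        self.per = per
        self.tokens = float(rate)       # start with a full bucket
        self.last = time.monotonic()

    def acquire(self):
        """Consume one token if available; return False if the caller must wait."""
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at the bucket size
        self.tokens = min(self.rate,
                          self.tokens + (now - self.last) * self.rate / self.per)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A spider worker would call `acquire()` before each fetch and back off when it returns `False`.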

@oltarasenko

Request limiting is currently achieved through the number of workers. Could you explain why that does not meet your needs?


Ziinc commented May 19, 2020

Yes, currently I drop the worker count to one, but it still averages 60–70 requests per minute, which is a tad too high for my liking. That is over 3,600 requests per hour, which would likely be flagged by anomaly-based firewall systems.

So far I haven't had many issues, but it would be nice to have granular control over requests/min.

Perhaps tag this as a "nice-to-have"?

@oltarasenko oltarasenko self-assigned this May 19, 2020
@oltarasenko

Wow. I can't get more than 50 rpm from two workers on the CrawlyUI demo, for example.

I was not expecting it to be a problem, but indeed, it should be fixed.
A worker makes a request once per 300 milliseconds (https://github.com/oltarasenko/crawly/blob/master/lib/crawly/worker.ex#L11). Usually the HTTP part is the bottleneck here. However, we can make this interval configurable. Let me implement it quickly so you can have it sooner.

Do you need it fast? I can release 0.10.1 for this tomorrow.
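For reference, an interval-based cap converts to requests per minute as 60,000 ms divided by the interval; assuming the constant above is 300 ms, that caps a single worker at roughly 200 rpm. A quick illustrative check (hypothetical helper, not Crawly code):

```python
def max_rpm(interval_ms):
    """Upper bound on requests/minute for a worker that waits
    `interval_ms` milliseconds between consecutive requests."""
    return 60_000 / interval_ms

# a 300 ms inter-request interval caps one worker at 200 requests/minute
```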


Ziinc commented May 19, 2020

It isn't urgent, no need to rush it. It's just something I noticed, and it's been on my mind since our discussion in January about how requests are fetched (#39 (comment)), since this type of throttling customization could be achieved through a "pipeline" module between the fetching and data-storage portions in the diagram.

For example, I could do things like randomize the throttle rate, or base the throttle rate on some calculation.
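For instance, a randomized delay could be sketched like this in Python (illustrative only; the `throttled_fetch` wrapper and its parameters are hypothetical, not a Crawly API):

```python
import random
import time

def throttled_fetch(fetch, url, base_delay=1.0, jitter=0.5):
    """Sleep a randomized interval before each request so the crawl
    has no fixed cadence, then delegate to the given fetch function."""
    time.sleep(base_delay + random.uniform(0, jitter))
    return fetch(url)
```

The same hook could instead compute the delay from a response header, the time of day, or any other calculation.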

But yeah, for some reason my machine makes the requests quite quickly.

@oltarasenko

@Ziinc Actually, I don't want the flexibility of assigning a given speed to a given worker. It would produce unpredictable results when different workers have different speeds, making it hard to reason about why one thing is faster and another is slower.

My current thinking is to hardcode the worker's speed at some value, the same for all workers. E.g. we could allow at most 5 requests per minute per worker, or even 1 request per minute per worker. What do you think?
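Under this scheme the aggregate crawl rate is simply the worker count times the per-worker rate, so users would tune throughput by scaling workers. A trivial sanity check with the numbers above (illustrative only):

```python
def total_rpm(workers, per_worker_rpm):
    """Aggregate requests per minute when every worker
    is capped at the same per-worker rate."""
    return workers * per_worker_rpm

# e.g. 4 workers at 5 requests/minute each -> 20 requests/minute overall
```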

@oltarasenko

@Ziinc I have made this: #111

Hopefully it improves your case.


Ziinc commented May 21, 2020

Many thanks. So adjusting the request rate will be based on the number of workers? An interesting approach. I'll review the PR.

@Ziinc Ziinc closed this as completed May 27, 2020