It seems to me that this behaviour is undocumented: the scraper goes into a back-out state once the slot's active size exceeds 5M, as in
where the default value is always used.
I am hitting a deadlock scenario: once the slot holds more than 5M, and since enqueueing a request checks for backout, new requests can never be completed.
I can submit a PR if it makes sense to extract this into a setting, say SCRAPER_SLOT_MAX_ACTIVE_SIZE.
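For context, a minimal sketch of the backout mechanism described above, modeled loosely on Scrapy's scraper `Slot` (the class shape, method names, and the per-response size floor here are assumptions for illustration; the hard-coded 5M default is the value this issue is about):

```python
class Slot:
    """Tracks the total size of responses currently being processed.

    Hypothetical simplification of Scrapy's scraper slot: while the
    active size exceeds max_active_size, needs_backout() returns True
    and the engine stops feeding new requests into the scraper.
    """

    MIN_RESPONSE_SIZE = 1024  # assumed floor counted per response

    def __init__(self, max_active_size=5_000_000):
        # Hard-coded default: not read from settings, so it cannot be
        # tuned (hence the proposed SCRAPER_SLOT_MAX_ACTIVE_SIZE).
        self.max_active_size = max_active_size
        self.active_size = 0

    def add_response(self, body_size):
        self.active_size += max(body_size, self.MIN_RESPONSE_SIZE)

    def finish_response(self, body_size):
        self.active_size -= max(body_size, self.MIN_RESPONSE_SIZE)

    def needs_backout(self):
        # While this is True, no new requests are scheduled; if the
        # in-flight responses never drain, the crawl stalls.
        return self.active_size > self.max_active_size


slot = Slot()
slot.add_response(6_000_000)   # one large response exceeds the cap
print(slot.needs_backout())    # True: new requests are held back
slot.finish_response(6_000_000)
print(slot.needs_backout())    # False: the slot has drained
```

Making `max_active_size` configurable would let crawls that legitimately carry large in-flight responses raise the cap instead of stalling.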
@vincentlaucy Did changing this value fix your deadlock? Do you remember whether it was really a deadlock or just a severe slowdown?
Closing due to a lack of feedback.