Replies: 2 comments
- No, you cannot currently set different concurrency limits for different domains in Crawlee; concurrency is managed globally per crawler.
- But you can run multiple crawler instances, one per domain, each with its own concurrency settings. You'd have to use separate storage instances to isolate their contexts; I'm not sure we have a Python example for that in the docs.
Does Crawlee support per-domain concurrency?
For example, one domain (paklap.pk) can't handle much load, but another (centurycomputerpk.com) can.
Does Crawlee allow setting concurrency limits per domain, or is concurrency managed globally?
In Scrapy, this is possible through the download_slot mechanism. I’m wondering if there’s an equivalent in Crawlee.