benchmarks? #45
No, it doesn't. You can check that by watching CPU usage on your machine while you crawl. Crawling speed depends on many factors, not only your machine's capabilities: it also depends on the settings you use for the job and on the target server.
I would rephrase my question: what difference would I get if I did the same crawl with pure Scrapy versus Scrapy using Portia?
There can of course be a small CPU usage cost, but that is not the usual case (unless the target site has very, very big pages, and even then the cost is probably not significant). As I said, you can check CPU usage yourself. Also, we crawl at much higher speeds than you are getting, using scrapely/slybot (Portia). Again, crawling speed depends on many other factors, which do not come from a difference between pure Scrapy and Portia. I would start by checking settings/middleware differences between your runs with Scrapy and with Portia.
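One stdlib-only way to check the CPU-cost question empirically is to time just the extraction step with the process CPU clock. This is a minimal sketch, not how Portia itself measures anything; `parse` is a hypothetical stand-in for a Scrapy callback or a scrapely extraction pass:

```python
import time

def cpu_cost(fn, *args, repeat=100):
    # Average CPU seconds per call, using the process-wide
    # CPU clock (wall-clock noise from I/O is excluded).
    start = time.process_time()
    for _ in range(repeat):
        fn(*args)
    return (time.process_time() - start) / repeat

# Hypothetical extraction step; swap in your real callback
# (plain XPath vs. scrapely) to compare the two runs.
def parse(page):
    return [line for line in page.splitlines() if "<li>" in line]

page = "<ul>\n" + "\n".join(f"<li>item {i}</li>" for i in range(500)) + "\n</ul>"
print(f"{cpu_cost(parse, page):.6f} CPU s/call")
```

Running the same harness against both extraction paths on identical pages isolates the parser cost from network, delay, and middleware effects.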
There are some benchmarks in slybot that compare parsel extraction speed to scrapely extraction speed:
These tests extract 10-15 items from each of 300 pages. As you can see, the part that slows scrapely down compared to parsel is the parsing, which is written in Python, whereas lxml is written in C. There was an attempt to convert the scrapely parser from Python to Cython, but unfortunately this didn't work on all pages, so it was disabled. With the Cython parser, scrapely was faster than parsel, so we need to look at fixing the bugs in the Cython parser and re-enabling it.
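The shape of such a benchmark can be sketched with the stdlib alone. This is not the slybot benchmark itself (those use parsel and scrapely); it just mirrors the described workload of 300 pages with roughly a dozen items each, using `html.parser` as a stand-in pure-Python parser:

```python
import timeit
from html.parser import HTMLParser

class ItemParser(HTMLParser):
    # Collects text inside <li> tags -- a stand-in for real
    # item extraction with parsel or scrapely.
    def __init__(self):
        super().__init__()
        self.in_item = False
        self.items = []
    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_item = True
    def handle_endtag(self, tag):
        if tag == "li":
            self.in_item = False
    def handle_data(self, data):
        if self.in_item:
            self.items.append(data)

def extract(page):
    p = ItemParser()
    p.feed(page)
    return p.items

# 300 synthetic pages with 12 items each, roughly matching
# the workload described above.
pages = ["<ul>" + "".join(f"<li>item {i}</li>" for i in range(12)) + "</ul>"] * 300

elapsed = timeit.timeit(lambda: [extract(p) for p in pages], number=1)
print(f"extracted {300 * 12} items in {elapsed:.3f}s")
```

Timing the same page set with an lxml-backed extractor in place of `ItemParser` would show the C-vs-Python parsing gap the comment describes.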
Hi
Is there a benchmark that would let us test the speed of the crawler? I see that I can only crawl 2 pages per second (with delay disabled), and the connection/RAM/CPU on my server is very fast. The server I am scraping is also fast.
I am asking whether the scrapely library makes crawling slower than usual. I would really like to see speed tests of scrapely (the learning-based extractor) versus normal XPath extraction using Scrapy itself.