
benchmarks? #45

Closed
mezuqu opened this issue May 18, 2014 · 4 comments

Comments

@mezuqu

mezuqu commented May 18, 2014

Hi

Is there a benchmark that would let us test the speed of the crawler? I see that I can only crawl about 2 pages per second (with the download delay disabled), even though the connection/RAM/CPU on my server are more than fast enough. The same goes for the server I am scraping.

What I am asking is whether the scrapely library makes crawling slower than usual. I would really like to see speed comparisons between scrapely (the learning-based extractor) and plain XPath extraction with Scrapy itself.

@kalessin
Member

No, it doesn't. You can verify that by watching the CPU usage on your machine while you crawl.

Crawling speed depends on many factors, not only your machine's capabilities. It also depends on the settings you are using for the job and on the target server.
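For illustration, these are the Scrapy settings that most directly affect throughput; the values below are arbitrary examples, not recommendations:

# settings.py (or custom_settings on the spider); example values only
CONCURRENT_REQUESTS = 32             # total concurrent requests
CONCURRENT_REQUESTS_PER_DOMAIN = 16  # per-domain cap
DOWNLOAD_DELAY = 0                   # fixed delay between requests to the same site
AUTOTHROTTLE_ENABLED = False         # adaptive throttling slows crawls when enabled
ROBOTSTXT_OBEY = True                # the target site's rules still apply

Even with aggressive values here, the target server's response time puts a hard ceiling on how fast you can go.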

@mezuqu
Author

mezuqu commented May 20, 2014

Let me rephrase my question: what difference would I see if I did the same crawl with pure Scrapy versus Scrapy using Portia?

@kalessin
Member

There can of course be a small CPU cost, but that is not the usual case (unless the target site has very large pages, and even then the cost is probably not significant). As I said, you can check CPU usage yourself. Also, we crawl at much higher speeds than you are getting while using scrapely/slybot (Portia).

Again, crawling speed depends on many other factors that have nothing to do with a difference between pure Scrapy and Portia. I would start by checking settings/middleware differences between your Scrapy runs and your Portia runs.
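One quick way to compare the two (a sketch, assuming both are standard Scrapy projects with a settings module, as slybot/Portia projects are) is to print the effective settings from inside each project directory and diff the output:

# dump_settings.py - run inside each project directory, then diff the output
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
for key in ("CONCURRENT_REQUESTS", "CONCURRENT_REQUESTS_PER_DOMAIN",
            "DOWNLOAD_DELAY", "AUTOTHROTTLE_ENABLED",
            "DOWNLOADER_MIDDLEWARES", "SPIDER_MIDDLEWARES"):
    print(key, "=", settings.get(key))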

@tpeng removed the wontfix label Jul 9, 2014
@ruairif
Contributor

ruairif commented Jun 21, 2016

There are some benchmarks in slybot that compare parsel extraction speed to scrapely extraction speed:

test_extraction_speed.TestExtractionSpeed.test_slybot_parse_and_extract: 20.4085s
test_extraction_speed.TestExtractionSpeed.test_parsel_parse_and_extract: 16.3027s
test_extraction_speed.TestExtractionSpeed.test_parsel_extract: 12.9132s
test_extraction_speed.TestExtractionSpeed.test_slybot_extract: 10.6147s

These tests extract 10-15 items from each of 300 pages. As you can see, the part that slows scrapely down compared to parsel is the parsing, which in scrapely is written in Python, whereas parsel uses lxml, which is written in C. There was an attempt to convert the scrapely parser from Python to Cython, but unfortunately it didn't work on all pages, so it was disabled. With the Cython parser scrapely was faster than parsel, so we need to look at fixing the bugs in the Cython parser and re-enabling it.
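For anyone who wants to get a feel for the difference locally, here is a minimal timing sketch (not the slybot benchmark itself) of the two parsing steps discussed above: parsel's lxml-backed parse versus scrapely's pure-Python tokenizer. The file name and iteration count are placeholders.

# time_parsing.py - rough comparison of the two HTML parsing steps
import time

from parsel import Selector                # backed by lxml (C)
from scrapely.htmlpage import parse_html   # pure-Python tokenizer

html = open("sample.html", encoding="utf-8").read()  # any saved page

start = time.perf_counter()
for _ in range(300):
    Selector(text=html)                    # parses the document with lxml
parsel_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(300):
    list(parse_html(html))                 # consume scrapely's token generator
scrapely_time = time.perf_counter() - start

print("parsel/lxml parse: %.4fs" % parsel_time)
print("scrapely parse:    %.4fs" % scrapely_time)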
