Add browser rendering using headless chrome #115

Closed
untoldbyte opened this issue Jul 1, 2020 · 7 comments
Comments

@untoldbyte

untoldbyte commented Jul 1, 2020

Hi,

Feature request: we could replace Splash with headless Chrome via Puppeteer.

@Ziinc
Collaborator

Ziinc commented Aug 31, 2020

@untoldbyte what is the use case for using Puppeteer to render the HTML? Do you require page interaction?

@untoldbyte
Author

yes

@Ziinc
Collaborator

Ziinc commented Sep 1, 2020

Is there any specific reason for choosing Chrome/Puppeteer over Selenium?

@oltarasenko
Collaborator

@Ziinc we also did not use Selenium for real scraping back when I was working with Scrapy. My personal view of it:

  1. It was not stable enough (e.g. it would sometimes hang unexpectedly)
  2. It did not allow modifying request headers: WebDriver lacks HTTP response header and status code methods SeleniumHQ/selenium-google-code-issue-archive#141

So my understanding is that Selenium is not suitable for web crawling, or at least it was not the last time I looked at it.

I would try to build something similar to what is described here: https://blog.scrapinghub.com/how-to-use-a-proxy-in-puppeteer. It should be similar to Splash. (I was about to start on the task, but this summer was a bit chaotic for me :()
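To make the idea concrete, here is a minimal sketch of such a Splash-like fetch script, in the spirit of the scrapinghub post above. It also covers the two things Selenium was faulted for: setting request headers and reading the response status code. This assumes Puppeteer is installed (`npm install puppeteer`); the proxy address and header values are placeholders, not part of any existing Crawly API.

```javascript
// Hypothetical fetcher script: render a page with headless Chrome and return
// the final HTML plus response metadata for the Elixir side to parse.

function buildLaunchOptions(proxy) {
  // Chromium takes the proxy as a command-line switch, as in the linked post.
  const args = proxy ? [`--proxy-server=${proxy}`] : [];
  return { headless: true, args };
}

async function fetchRendered(url, proxy) {
  const puppeteer = require("puppeteer"); // required lazily: `npm install puppeteer`
  const browser = await puppeteer.launch(buildLaunchOptions(proxy));
  try {
    const page = await browser.newPage();
    // Custom request headers -- one of the things WebDriver does not allow.
    await page.setExtraHTTPHeaders({ "Accept-Language": "en-US" });
    const response = await page.goto(url, { waitUntil: "networkidle0" });
    return {
      status: response.status(),   // status code, also unavailable in WebDriver
      headers: response.headers(), // response headers
      body: await page.content(),  // fully rendered DOM, like Splash returns
    };
  } finally {
    await browser.close();
  }
}
```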

@Ziinc
Collaborator

Ziinc commented Sep 1, 2020

While there is no doubt that Puppeteer would be great for browser automation, and browser monoculture is not much of an issue (since Firefox support is being added soon), I am not against adding Puppeteer as a Fetcher module. My only concern is how we would reconcile page interaction with HTML parsing.

The key pain point in resolving this issue is how to obtain further HTML from page interactions.

For example, suppose the data we want is only rendered inside modals that are toggled open with buttons. Using Puppeteer, we can open each modal and scrape the data in a single request with a single Node.js script. However, doesn't that make the spider's parse_item callback redundant? All the scraping logic would live in the Node.js script itself, which prevents us from using Elixir's libraries and ecosystem, reusing logic, etc. And we don't currently have Elixir bindings for the Puppeteer API, which prevents us from interacting with the browser from the parse_item callback directly.

A few ideas I have for this problem:

  1. the Node.js scripts scrape HTML fragments, which are attached to the Response object and then passed to the parse_item callback for scraping.
    • this scrapes the page twice, which is somewhat inefficient
  2. the browser window is kept open while parse_item is called, and we interact with the page through an exec_script/1 function that executes either a JS fragment or a script file. This function then returns a new request with the updated body, and we can continue parsing it in Elixir.
    • I think this ensures that most parsing is done in Elixir, and we only use Node.js for interacting with the browser, which is much more ideal and prevents over-use of JS
  3. capturing async API requests/responses made by the JS site and parsing the responses directly, using idea 2 to interact with the page
    • this would allow scraping the API responses directly for cleaner data

I think idea 2 is the better option, and idea 3 would be a nice-to-have.
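A rough sketch of what the Node.js side of ideas 2 and 3 might look like, assuming a Puppeteer `page` object is kept open between calls. The function names, the JSON message shape for the Elixir-to-Node port, and the URL pattern are all hypothetical, not an existing interface:

```javascript
// Idea 2: each exec_script/1 call could cross the Elixir<->Node port as a
// small JSON message like this one (shape is hypothetical).
function encodeExecScript(fragment) {
  return JSON.stringify({ cmd: "exec_script", fragment });
}

// Run the JS fragment in the page context, then hand the updated body
// back so parse_item can continue parsing it in Elixir.
async function execScript(page, fragment) {
  await page.evaluate(fragment);
  return await page.content();
}

// Idea 3: capture async API responses made by the site itself, so we can
// parse the (cleaner) JSON payloads instead of the rendered DOM.
function captureApiResponses(page, urlPattern, sink) {
  page.on("response", async (response) => {
    if (response.url().includes(urlPattern)) {
      sink.push(await response.text());
    }
  });
}
```

The point of `execScript` returning the full post-interaction body is that Node.js stays a thin browser driver while all actual parsing remains in Elixir.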

@oltarasenko
Collaborator

@Ziinc maybe you're right. I don't have a full picture of it yet. I would try to build a prototype and see what is required for production usage.

From my scraping experience, I would say that scraping something from modal windows was an extremely rare [I would even say almost never encountered] use case... In the vast majority of cases, a simple request to something like Splash (which is also scriptable and, to a degree, allows executing JS on the client side) was enough.

For now I would consider headless Chrome as a way to overcome bans. Currently our Amazon spiders are blocked after 2000-3000 requests, so I am looking for a standard way to do better.

@Ziinc
Collaborator

Ziinc commented Sep 20, 2020

We could consider Microsoft's Playwright, which is cross-browser. It might make rotating the user agent easier.
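For instance, Playwright lets each BrowserContext carry its own user agent, so rotation is just a matter of creating a fresh context per request batch. A minimal sketch, assuming Playwright is installed (`npm install playwright`); the user-agent list is an illustrative placeholder:

```javascript
// Hypothetical user-agent pool; in practice this would come from config.
const USER_AGENTS = [
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
];

function pickUserAgent(n) {
  return USER_AGENTS[n % USER_AGENTS.length]; // simple round-robin rotation
}

async function fetchWithRotatedUa(url, n) {
  const { chromium } = require("playwright"); // required lazily: `npm install playwright`
  const browser = await chromium.launch();
  try {
    // Each context gets its own user agent -- the rotation point.
    const context = await browser.newContext({ userAgent: pickUserAgent(n) });
    const page = await context.newPage();
    await page.goto(url);
    return await page.content();
  } finally {
    await browser.close();
  }
}
```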
