I'm trying to run the crawler to extract links from a simple page using the following command:
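For context, a PhantomJS-driven crawl routed through a local intercepting proxy is usually launched with the `--proxy`/`--proxy-type` options; the script name and target URL in this sketch are placeholders, not taken from the actual setup:

```
# Illustrative only: crawl.js and the wivet URL are placeholders.
# --proxy/--proxy-type send all of PhantomJS's traffic through the
# intercepting proxy listening on 127.0.0.1:8080.
phantomjs --proxy=127.0.0.1:8080 --proxy-type=http crawl.js http://127.0.0.1/wivet/pages/11.php
```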
`11.php` is part of wivet, a web crawler test application. The response generated when browsing to `11.php` is:

I see this in the browser I'm running through 127.0.0.1:8080 (see the proxy param in the phantomjs call). I also see the jquery.js page being requested. The output seen on stdout when running the command is:
The first one is the response to the initial GET request. The second one seems to come from a click on one of the links:
My questions are:
1. Why don't I see a request for `11_2d3ff.php` in my proxy?
2. If the crawler did not click on the second link, how was that URL extracted?
3. Is there something I'm doing wrong? I'm using phantomjs 2.1.1.
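One way to narrow down the first two questions is to log, from inside PhantomJS itself, every request the page issues alongside the links found in the DOM: if a URL shows up among the links but never as a request, it was extracted from the markup without being clicked, and the proxy would never see it. The sketch below is a standalone diagnostic (the target URL is a placeholder), not part of the crawler:

```
var page = require('webpage').create();
var target = 'http://127.0.0.1/wivet/pages/11.php'; // placeholder URL

// Log every request the page issues, whether triggered by the initial
// load, by JavaScript, or by a simulated click.
page.onResourceRequested = function (requestData) {
    console.log('REQUEST ' + requestData.method + ' ' + requestData.url);
};

page.open(target, function (status) {
    if (status !== 'success') {
        console.log('Failed to load ' + target);
        phantom.exit(1);
        return;
    }
    // Give client-side script (the page loads jquery.js) a moment to run
    // before reading the DOM.
    setTimeout(function () {
        // Collect hrefs without clicking them: a URL can show up here even
        // though no request for it is ever sent through the proxy.
        var links = page.evaluate(function () {
            return Array.prototype.map.call(
                document.querySelectorAll('a[href]'),
                function (a) { return a.href; }
            );
        });
        console.log('LINKS ' + JSON.stringify(links));
        phantom.exit(0);
    }, 2000);
});
```

Running this with the same `--proxy` flags as the crawler makes it easy to compare what PhantomJS itself sends with what the proxy at 127.0.0.1:8080 records.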