
Unable to get up and running from the quick start #14

Closed · Ziinc opened this issue Oct 1, 2019 · 13 comments

@Ziinc (Collaborator) commented Oct 1, 2019

Hi, I am unable to scrape the Erlang Solutions blog by following the quickstart guide here:
https://github.com/oltarasenko/crawly#quickstart

Attempting to run the spider through iex results in:

iex(1)> Crawly.Engine.start_spider(MyCrawler.EslSpider)
[info] Starting the manager for Elixir.MyCrawler.EslSpider
[debug] Running spider init now.
[debug] Scraped ":title,:url"
[debug] Starting requests storage worker for Elixir.MyCrawler.EslSpider...
[debug] Started 2 workers for Elixir.MyCrawler.EslSpider
:ok
iex(2)> [info] Current crawl speed is: 0 items/min
[info] Stopping MyCrawler.EslSpider, itemcount timeout achieved

I'm quite lost, as there is no way for me to debug this: I can't tell whether it is a network issue (highly unlikely, since I can access the ESL website through my browser) or an issue with the URLs being filtered out.

defmodule MyCrawler.EslSpider do
  @behaviour Crawly.Spider
  alias Crawly.Utils
  require Logger
  @impl Crawly.Spider
  def base_url(), do: "https://www.erlang-solutions.com"

  @impl Crawly.Spider
  def init() do
    Logger.debug("Running spider init now.")
    [start_urls: ["https://www.erlang-solutions.com/blog.html"]]
  end

  @impl Crawly.Spider
  def parse_item(response) do
    IO.inspect(response)
    hrefs = response.body |> Floki.find("a.more") |> Floki.attribute("href")

    requests =
      Utils.build_absolute_urls(hrefs, base_url())
      |> Utils.requests_from_urls()

    # Modified this to make it even more general, to eliminate the possibility of selector problem
    title = response.body |> Floki.find("title") |> Floki.text()

    %{
      :requests => requests,
      :items => [%{title: title, url: response.request_url}]
    }
  end
end

Of note is that the spider never even reaches the parse_item callback, as the IO.inspect on the response is not called at all.
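
One way to sanity-check the callback in isolation (a sketch; the exact return shape of Crawly.fetch/1 may differ between Crawly versions):

# Fetch the page outside the engine and feed the response straight into
# the spider callback; if this works, the parsing side is fine and the
# problem is in the fetch/engine layer.
response = Crawly.fetch("https://www.erlang-solutions.com/blog.html")
MyCrawler.EslSpider.parse_item(response)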

Config is as follows:

config :crawly,
  closespider_timeout: 10,
  concurrent_requests_per_domain: 2,
  follow_redirects: true,
  output_format: "csv",
  item: [:title, :url],
  item_id: :title

@oltarasenko (Collaborator):

Hey @Ziinc, your code apparently works for me. Do you have any other details to share? Should I try with different Elixir/Erlang versions?
[screenshot: successful crawl output, 2019-10-01]

@oltarasenko (Collaborator):

Could you also try to do Crawly.fetch("https://www.erlang-solutions.com/blog.html") just to check if it's possible to make requests to the blog from your machine?

@oltarasenko (Collaborator):

@Ziinc in any case, tell me what you're trying to achieve so I can try to help.

@Ziinc (Collaborator, Author) commented Oct 2, 2019

I'm trying to integrate Crawly with an existing Phoenix project.

My dependencies are as follows (though I doubt there are any dependency conflicts):

  defp deps do
    [
      {:phoenix, "~> 1.4.0"},
      {:phoenix_pubsub, "~> 1.1"},
      {:phoenix_ecto, "~> 4.0"},
      {:ecto_sql, "~> 3.0"},
      {:postgrex, ">= 0.0.0"},
      {:phoenix_html, "~> 2.11"},
      {:phoenix_live_reload, "~> 1.2", only: :dev},
      {:gettext, "~> 0.11"},
      {:jason, "~> 1.0"},
      {:plug_cowboy, "~> 2.0"},
      {:mix_test_watch, "~> 0.8", only: :dev, runtime: false},
      {:comeonin, "~> 4.1"},
      {:bcrypt_elixir, "~> 1.1"},
      {:distillery, "~> 2.0", runtime: false},
      {:httpoison, "~> 1.4"},
      {:ex_aws, "~> 2.0"},
      {:ex_aws_s3, "~> 2.0"},
      {:sweet_xml, "~> 0.6"},
      {:bureaucrat, "~> 0.2.5"},
      {:hound, "~> 1.0", only: [:dev, :test], runtime: false},
      {:mogrify, "~> 0.7.3"},
      {:honeydew, "~> 1.4.4"},
      {:crawly, "~> 0.5.0"}
    ]
  end

I was initially on Elixir 1.8.2 when I first encountered the issue, and updated to see if it would help.

This is what happens with Crawly.fetch():

Interactive Elixir (1.9.1) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> Crawly.fetch("https://www.erlang-solutions.com/blog.html")
{:error,
 %HTTPoison.Error{id: nil, reason: {:option, :server_only, :honor_cipher_order}}}

I will try the quickstart on a fresh project and report back

@Ziinc (Collaborator, Author) commented Oct 2, 2019

@oltarasenko I've been able to get the quickstart to work in a new project, but it still does not work when I try to add it to the existing project. I think it's an issue with dependency conflicts, notably httpoison.

@Ziinc (Collaborator, Author) commented Oct 2, 2019

@oltarasenko Seems like it is an issue with hackney:

https://elixirforum.com/t/hackney-error-option-server-only-honor-cipher-order/25541

I'll recompile, update the deps, and try again.

@Ziinc (Collaborator, Author) commented Oct 2, 2019

Updating the httpoison dependency did the trick; it upgraded hackney to the latest version, where the SSL issue was fixed. Thanks!
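
For reference, the change amounts to letting mix resolve a newer hackney; the constraint below is an assumption (any httpoison release that pulls in a hackney containing the :honor_cipher_order fix, reportedly 1.15.2+, should work):

# mix.exs: either keep {:httpoison, "~> 1.4"} and run
# `mix deps.update httpoison hackney`, or pin a newer release:
{:httpoison, "~> 1.6"},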

Ziinc closed this as completed Oct 2, 2019

@Ziinc (Collaborator, Author) commented Oct 2, 2019

I think that instead of letting the HTTPoison errors get swallowed, it would be good to surface them as debug logs.

@oltarasenko (Collaborator):

OK, I am re-opening this, as the thing you mentioned (https://github.com/oltarasenko/crawly/blob/master/lib/crawly/worker.ex#L43) requires a fix. I will handle the error there and log the error message. Good catch!
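
Roughly the idea, as a sketch rather than the final worker code (the request fields here are assumptions, not Crawly's actual API):

defmodule FetchSketch do
  require Logger

  # Instead of letting a failed fetch get swallowed, match on the error
  # tuple and log the reason before handing it back to the caller.
  def fetch(request) do
    case HTTPoison.get(request.url, request.headers, request.options) do
      {:ok, response} ->
        {:ok, response}

      {:error, %HTTPoison.Error{reason: reason} = error} ->
        Logger.debug("Unable to fetch #{request.url}, reason: #{inspect(reason)}")
        {:error, error}
    end
  end
end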

oltarasenko reopened this Oct 2, 2019

@oltarasenko (Collaborator):

@Ziinc Could you please have a glance at #15? It is pretty trivial for now; however, I plan to extend the worker quite soon, as I am currently working on support for different user agents (aka webdriver support).

@Ziinc (Collaborator, Author) commented Oct 3, 2019

Looks good. A possible extension (to add to the backlog) for error behaviour could be a configurable fallback module that is called when the backoff retries fail, allowing the engine to execute some wrap-up function, possibly to alert on errors.

@Ziinc (Collaborator, Author) commented Oct 3, 2019

Possibly specified at either the spider level or the config level.
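
Sketched out, it could look something like this (all names are hypothetical; nothing like this exists in Crawly yet):

# Hypothetical fallback hook, invoked after backoff retries are exhausted.
defmodule MyCrawler.ErrorHandler do
  require Logger

  def on_retries_exhausted(spider_name, request) do
    Logger.error("#{inspect(spider_name)} gave up on #{request.url}")
    # e.g. fire an alert (Slack, email, Sentry) from here
  end
end

# config-level wiring (a spider-level callback could override it):
# config :crawly, fallback_module: MyCrawler.ErrorHandler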

@oltarasenko (Collaborator):

This is now fixed in 0.6.0.
