
Commit

Documentation updates
1. Explain how to run Crawly as a standalone application
2. Explain how to create spiders as YML file
oltarasenko committed Apr 6, 2023
1 parent c3b9797 commit 9030b0d
Showing 7 changed files with 147 additions and 1 deletion.
Binary file added documentation/assets/create_yml_spider.png
Binary file added documentation/assets/management_ui.png
Binary file added documentation/assets/management_ui2.png
Binary file added documentation/assets/preview_yml_spider.png
42 changes: 42 additions & 0 deletions documentation/spiders_in_yml.md
@@ -0,0 +1,42 @@
# Defining spiders in YML

Starting from version 0.15.0, Crawly makes it possible to define spiders as YML files directly from Crawly’s management interface. The main idea is to reduce the amount of boilerplate needed for simple spiders.

You should not have to write code just to get titles and descriptions from Reddit (we hope :)

## Quickstart
1. Start the Crawly application (either using the classical, dependency-based approach or as a standalone application)
2. Open the Crawly Management interface (localhost:4001)
![Create YML Spider](./assets/create_yml_spider.png)
3. Define the spider using the following structure:
``` yml
name: BooksSpiderForTest
base_url: "https://books.toscrape.com/"
start_urls:
- "https://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html"
fields:
- name: title
selector: ".product_main"
- name: price
selector: ".product_main .price_color"
links_to_follow:
- selector: "a"
attribute: "href"
```

4. Click the Preview button to see how the extracted data will look once the spider is created:
![Preview YML Spider](./assets/preview_yml_spider.png)

5. After saving the spider, you can schedule it using the Crawly Management interface.

## YML Spider Structure

* "name" (required): A string representing the name of the scraper.
* "base_url" (required): A string representing the base URL of the website being scraped. The value must be a valid URI.
* "start_urls" (required): An array of strings representing the URLs to start scraping from. Each URL must be a valid URI.
* "links_to_follow" (required): An array of objects representing the links to follow when scraping a page. Each object must have the following properties:
* "selector": A string representing the CSS selector for the links to follow.
* "attribute": A string representing the attribute of the link element that contains the URL to follow.
* "fields" (required): An array of objects representing the fields to scrape from each page. Each object must have the following properties:
* "name": A string representing the name of the field.
* "selector": A string representing the CSS selector for the field to scrape.
103 changes: 103 additions & 0 deletions documentation/standalone_crawly.md
@@ -0,0 +1,103 @@
# Running Crawly as a standalone application

This approach abstracts all scraping tasks into a separate entity (or service), allowing you to extract the data you need without adding Crawly to your mix file.

In other words:
```
Run Crawly as a Docker container with spiders mounted from the outside.
```

## Getting started

Here we will show how to re-implement the example from the Quickstart, achieving the same results with a standalone version of Crawly.

1. Make a folder for your project: `mkdir myproject`
2. Make a spiders folder inside the project folder: `mkdir ./myproject/spiders`
3. Copy the spider code into a file inside the folder created in the previous step, `myproject/spiders/books_to_scrape.ex` (an optional local check of the parsing logic follows the code):
``` elixir
defmodule BooksToScrape do
  use Crawly.Spider

  @impl Crawly.Spider
  def base_url(), do: "https://books.toscrape.com/"

  @impl Crawly.Spider
  def init() do
    [start_urls: ["https://books.toscrape.com/"]]
  end

  @impl Crawly.Spider
  def parse_item(response) do
    # Parse the response body into a document
    {:ok, document} = Floki.parse_document(response.body)

    # Create items (for pages where items exist)
    items =
      document
      |> Floki.find(".product_pod")
      |> Enum.map(fn x ->
        %{
          title: Floki.find(x, "h3 a") |> Floki.attribute("title") |> Floki.text(),
          price: Floki.find(x, ".product_price .price_color") |> Floki.text(),
          url: response.request_url
        }
      end)

    # Extract pagination links and turn them into follow-up requests
    next_requests =
      document
      |> Floki.find(".next a")
      |> Floki.attribute("href")
      |> Enum.map(fn url ->
        Crawly.Utils.build_absolute_url(url, response.request_url)
        |> Crawly.Utils.request_from_url()
      end)

    %{items: items, requests: next_requests}
  end
end
```
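
If you want a quick sanity check of the parsing logic before mounting the spider into a container, you can exercise `parse_item/1` from any Mix project or IEx session that has Floki and an HTTP client available. The snippet below is a sketch under that assumption (it uses HTTPoison, which Crawly itself depends on); it is not part of the standalone workflow.

``` elixir
# Optional local check: parse_item/1 only reads :body and :request_url,
# so a plain map is enough to exercise it outside of Crawly.
url = "https://books.toscrape.com/"
{:ok, %HTTPoison.Response{body: body}} = HTTPoison.get(url)

result = BooksToScrape.parse_item(%{body: body, request_url: url})

IO.inspect(length(result.items), label: "items found")
IO.inspect(length(result.requests), label: "next requests")
```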
4. Now create a configuration file using the Erlang config format (https://www.erlang.org/doc/man/config.html). For comparison, the equivalent Elixir configuration is shown after the Erlang example below.

For example: `myproject/crawly.config`
``` erlang
[{crawly, [
{closespider_itemcount, 500},
{closespider_timeout, 20},
{concurrent_requests_per_domain, 2},

{middlewares, [
'Elixir.Crawly.Middlewares.DomainFilter',
'Elixir.Crawly.Middlewares.UniqueRequest',
'Elixir.Crawly.Middlewares.RobotsTxt',
{'Elixir.Crawly.Middlewares.UserAgent', [
{user_agents, [<<"Crawly BOT">>]}
]}
]},

{pipelines, [
{'Elixir.Crawly.Pipelines.Validate', [{fields, [title, url]}]},
{'Elixir.Crawly.Pipelines.DuplicatesFilter', [{item_id, title}]},
{'Elixir.Crawly.Pipelines.JSONEncoder'},
{'Elixir.Crawly.Pipelines.WriteToFile', [{folder, <<"/tmp">>}, {extension, <<"jl">>}]}
]
}]
}].
```
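
For comparison, when Crawly is used as a regular dependency in a Mix project, the same settings would live in `config/config.exs` roughly as follows. This is a sketch for orientation only; for the standalone container the Erlang terms above are what gets loaded.

``` elixir
import Config

config :crawly,
  closespider_itemcount: 500,
  closespider_timeout: 20,
  concurrent_requests_per_domain: 2,
  middlewares: [
    Crawly.Middlewares.DomainFilter,
    Crawly.Middlewares.UniqueRequest,
    Crawly.Middlewares.RobotsTxt,
    {Crawly.Middlewares.UserAgent, user_agents: ["Crawly BOT"]}
  ],
  pipelines: [
    {Crawly.Pipelines.Validate, fields: [:title, :url]},
    {Crawly.Pipelines.DuplicatesFilter, item_id: :title},
    Crawly.Pipelines.JSONEncoder,
    {Crawly.Pipelines.WriteToFile, folder: "/tmp", extension: "jl"}
  ]
```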

5. Now let's start Crawly (TODO: Insert link to the Crawly Docker repository):
```
docker run --name crawlyApp1 -e "SPIDERS_DIR=/app/spiders" \
-it -p 4001:4001 -v $(pwd)/spiders:/app/spiders \
-v $(pwd)/crawly.config:/app/config/crawly.config \
crawly
```

**Note:** the `SPIDERS_DIR` environment variable specifies the folder from which additional spiders are fetched; `./spiders` is used by default.

6. Open the Crawly Web Management interface in your browser: http://localhost:4001/

Here you can schedule a spider using the Schedule button. The interface also gives you access to other useful information, such as:
1. History of your jobs
2. Items
3. Logs of the given spider

![Crawly Management](./assets/management_ui.png)
![Crawly Management](./assets/management_ui2.png)
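
The `WriteToFile` pipeline configured earlier stores scraped items as JSON lines under `/tmp` inside the container. Below is a minimal sketch of reading them back for inspection, assuming a JSON decoder such as Jason is available and that you run it inside the container (for example via `docker exec`) or against a mounted copy of the folder; the exact file naming may vary between Crawly versions.

``` elixir
# Sketch: read every JSON-lines file produced by the WriteToFile pipeline.
# Assumes Jason is available for decoding; adjust the path if /tmp is
# mounted elsewhere on the host.
"/tmp/*.jl"
|> Path.wildcard()
|> Enum.flat_map(fn file ->
  file
  |> File.stream!()
  |> Enum.map(&Jason.decode!/1)
end)
|> Enum.take(3)
|> IO.inspect(label: "sample items")
```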



3 changes: 2 additions & 1 deletion mix.exs
@@ -61,7 +61,8 @@ defmodule Crawly.Mixfile do

# Add floki only for crawly standalone release
{:floki, "~> 0.33.0", only: [:dev, :test, :standalone_crawly]},
- {:logger_file_backend, "~> 0.0.11", only: [:test, :dev]}
+ {:logger_file_backend, "~> 0.0.11",
+   only: [:test, :dev, :standalone_crawly]}
]
end

