This project is an API that lets you scrape any website and send the data to Meilisearch. The server exposes a single endpoint.
Scrapix provides a CLI to start the crawling process.
```
Usage: yarn start [options]

Options:
  -c, --config        JSON string with the scrapix configuration
  -p, --config-path   Path to the scrapix configuration JSON file
  -b, --browser-path  Path to the browser binary
```
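For example, assuming your configuration is saved at `./config.json` (a hypothetical path):

```bash
yarn start -p ./config.json
```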
This endpoint crawls the website and sends the data to Meilisearch. It expects the following `data` payload:
```json
{
  "start_urls": ["https://www.google.com"],
  "urls_to_exclude": ["https://www.google.com"],
  "urls_to_index": ["https://www.google.com"],
  "urls_to_not_index": ["https://www.google.com"],
  "meilisearch_url": "http://localhost:7700",
  "meilisearch_api_key": "masterKey",
  "meilisearch_index_uid": "google",
  "strategy": "default", // docssearch, schema, custom or default
  "headless": true, // Use a headless browser to render JavaScript websites
  "batch_size": 1000, // pass null to send documents one at a time, or specify a batch size
  "primary_key": null,
  "meilisearch_settings": {
    "searchableAttributes": [
      "h1",
      "h2",
      "h3",
      "h4",
      "h5",
      "h6",
      "p",
      "title",
      "meta.description"
    ],
    "filterableAttributes": ["urls_tags"],
    "distinctAttribute": "url"
  },
  "schema_settings": {
    "only_type": "Product", // Product, Article, etc...
    "convert_dates": true // default false
  }
}
```
When the server receives a crawling request, it adds it to the queue. Once the job is in the queue, the server returns a response to the user. The queue is handled by Redis (Bull), which dispatches each job to a worker.
The worker will crawl only pages with the same domain names as those specified in the `start_urls` config option. It will not try to scrape external links or files, nor paginated pages (like `/page/1`).
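As an illustration, here is a minimal sketch of that filtering logic in TypeScript; this is not the actual Scrapix implementation, and `shouldCrawl` is a hypothetical helper:

```ts
// Keep URLs whose domain matches a start URL, and skip paginated paths.
function shouldCrawl(url: string, startUrls: string[]): boolean {
  const { hostname, pathname } = new URL(url)
  const sameDomain = startUrls.some((start) => new URL(start).hostname === hostname)
  const isPaginated = /\/page\/\d+/.test(pathname) // e.g. /page/1
  return sameDomain && !isPaginated
}

console.log(shouldCrawl('https://www.google.com/page/1', ['https://www.google.com'])) // false
console.log(shouldCrawl('https://www.google.com/about', ['https://www.google.com'])) // true
```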
For each scrapable page, it scrapes the data by trying to create blocks of titles and text. Each block will contain (see the example document after this list):
- `h1`: The title of the block
- `h2`: The subtitle of the block
- `h3`...`h6`: The deeper subtitles of the block
- `p`: The text of the block (an array of strings if the block contains multiple `p` elements)
- `page_block`: The block number within the page (starting at 0)
- `title`: The title of the page, taken from the `head` tag
- `uid`: A generated, incremental uid for the block
- `url`: The URL of the page
- `anchor`: The anchor of the block (the id of the lowest-level title in the block)
- `meta`: The meta of the page, taken from the `head` tag (a JSON object containing the description, keywords, author, twitter, og, etc.)
- `urls_tags`: The URL pathname split by `/` (array of strings). The last element is removed because it is the page name.
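For illustration, a document produced by the default strategy might look like this (all values are invented):

```json
{
  "uid": 0,
  "url": "https://www.example.com/docs/getting-started",
  "anchor": "#installation",
  "title": "Getting Started",
  "meta": { "description": "How to get started" },
  "h1": "Getting Started",
  "h2": "Installation",
  "p": ["Install the package.", "Then start the server."],
  "page_block": 0,
  "urls_tags": ["docs"]
}
```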
The docs-search strategy crawls the same set of pages: only pages with the same domain names as those specified in the `start_urls` config option, skipping external links, files, and paginated pages (like `/page/1`). For each scrapable page, it scrapes the data into blocks of titles and text in a format compatible with docs-searchbar. Each block will contain (see the example document after this list):
- `uid`: A generated, incremental uid for the block
- `hierarchy_lvl0`: The URL pathname split by `/` (array of strings). The last element is removed because it is the page name.
- `hierarchy_lvl1`: The `h1` of the block
- `hierarchy_lvl2`: The `h2` of the block
- `hierarchy_lvl3`: The `h3` of the block
- `hierarchy_lvl4`: The `h4` of the block
- `hierarchy_lvl5`: The `h5` of the block
- `hierarchy_radio_lvl0`: Same as `hierarchy_lvl0`
- `hierarchy_radio_lvl1`: Same as `hierarchy_lvl1`
- `hierarchy_radio_lvl2`: Same as `hierarchy_lvl2`
- `hierarchy_radio_lvl3`: Same as `hierarchy_lvl3`
- `hierarchy_radio_lvl4`: Same as `hierarchy_lvl4`
- `hierarchy_radio_lvl5`: Same as `hierarchy_lvl5`
- `content`: The text of the block (an array of strings if the block contains multiple `p` elements)
- `url`: The URL of the page, including the anchor
- `anchor`: The anchor of the block (the id of the lowest-level title in the block)
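Again for illustration, a docs-search block for the same hypothetical page might look like this (all values are invented):

```json
{
  "uid": 0,
  "hierarchy_lvl0": ["docs"],
  "hierarchy_lvl1": "Getting Started",
  "hierarchy_lvl2": "Installation",
  "hierarchy_lvl3": null,
  "hierarchy_lvl4": null,
  "hierarchy_lvl5": null,
  "hierarchy_radio_lvl0": ["docs"],
  "hierarchy_radio_lvl1": "Getting Started",
  "hierarchy_radio_lvl2": "Installation",
  "hierarchy_radio_lvl3": null,
  "hierarchy_radio_lvl4": null,
  "hierarchy_radio_lvl5": null,
  "content": ["Install the package."],
  "url": "https://www.example.com/docs/getting-started#installation",
  "anchor": "installation"
}
```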
While the worker is scraping the website, it sends the data to Meilisearch in batches. Before sending the data, it creates a new index called `{index_uid}_crawler_tmp`, applies the settings, and adds the documents to it. It then uses the index-swap method to replace the old index with the new one, and finishes by deleting the tmp index.
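Here is a minimal sketch of that flow using the meilisearch-js client; this is not the actual Scrapix code, and the index uid, settings, and documents are placeholders:

```ts
import { MeiliSearch } from 'meilisearch'

async function publish() {
  const client = new MeiliSearch({ host: 'http://localhost:7700', apiKey: 'masterKey' })
  const tmpUid = 'google_crawler_tmp' // {index_uid}_crawler_tmp

  // 1. Apply the settings and add the crawled documents to the tmp index.
  const tmp = client.index(tmpUid)
  await tmp.updateSettings({ distinctAttribute: 'url' })
  const task = await tmp.addDocuments([{ uid: 0, url: 'https://www.example.com/', title: 'Example' }])
  await client.waitForTask(task.taskUid)

  // 2. Swap the tmp index with the live one, then delete the tmp index.
  await client.swapIndexes([{ indexes: ['google', tmpUid] }])
  await client.deleteIndex(tmpUid)
}

publish()
```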
The settings applied:

```json
{
  "searchableAttributes": [
    "h1",
    "h2",
    "h3",
    "h4",
    "h5",
    "h6",
    "p",
    "title",
    "meta.description"
  ],
  "filterableAttributes": ["urls_tags"],
  "distinctAttribute": "url"
}
```
`start_urls` (mandatory): This array contains the list of URLs that will be used to start scraping your website. The scraper recursively follows any links (`<a>` tags) from those pages. It will not follow links that are on another domain.
`urls_to_exclude`: List of URLs to ignore.

`urls_to_index`: List of URLs that should be indexed.

`urls_to_not_index`: List of URLs that should not be indexed.
`meilisearch_url` (mandatory): The URL to your Meilisearch instance.
`meilisearch_api_key`: The API key to your Meilisearch instance. This key must have read and write permissions for the specified index.
`meilisearch_index_uid` (mandatory): Name of the index in which the content is indexed.
`strategy` (default: `default`): Scraping strategy:
- `default`: Scrapes the content of webpages; suitable for most use cases. It indexes the content in the format shown above.
- `docssearch`: Scrapes the content of webpages like the default strategy, but indexes the content in a format compatible with docs-searchbar.
- `schema`: Scrapes the schema information of your web app.
`headless` (default: `true`): Whether or not the JavaScript should be loaded before scraping starts.
`primary_key`: The key name in your documents containing their unique identifier.
`meilisearch_settings`: Your custom Meilisearch settings.
`schema_settings`: Used when your strategy is `schema`:
- `only_type`: Which types of schema should be parsed.
- `convert_dates`: Whether dates should be converted to timestamps. This is useful for ordering by date.
`user_agents`: An array of user agents that are appended to the end of the current user agents. For example, if your `user_agents` value is `['My Thing (vx.x.x)']`, the final user agent becomes:

```
Meilisearch JS (vx.x.x); Meilisearch Crawler (vx.x.x); My Thing (vx.x.x)
```
`webhook_payload`: When webhooks are enabled, the `webhook_payload` option makes it possible to provide information that will be added to the webhook payload.
`webhook_url`: The URL on which the webhook calls are made.
`additional_request_headers`: An object containing headers to be added to every request the crawler makes. This can be useful to add authentication headers to crawl protected sites. E.g., authenticate the crawler with basic auth:

```json
{
  "additional_request_headers": {
    "Authorization": "Basic dXNlcjpwYXNzd29yZA=="
  }
}
```
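For reference, the value shown above is just the base64 encoding of `user:password`; in Node you could produce it like this (the credentials are placeholders):

```ts
// Encode "user:password" for the Basic auth header.
const token = Buffer.from('user:password').toString('base64')
console.log(`Basic ${token}`) // Basic dXNlcjpwYXNzd29yZA==
```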
To receive updates on the state of the crawler, you need to create a webhook. To do so, you must have a public URL that is reachable by the crawler. This URL will be called by the crawler to send you updates.
To enable webhooks, you need to add the following env vars:
```
WEBHOOK_URL=https://mywebsite.com/webhook
WEBHOOK_TOKEN=mytoken
WEBHOOK_INTERVAL=1000
```
- The `WEBHOOK_URL` is the URL that will be called by the crawler. The calls are made with the `POST` method.
- The `WEBHOOK_TOKEN` is a token string that will be used to authenticate the request. If present, it is sent in the `Authorization` header of the request in the format `Authorization: Bearer ${token}`.
- The `WEBHOOK_INTERVAL` is a way to change how frequently you want to receive updates from the scraper. The value is in milliseconds. The default value is 5000ms.
Here is the Webhook payload:
```json
{
  "date": "2022-01-01T12:34:56.000Z",
  "meilisearch_url": "https://myproject.meilisearch.com",
  "meilisearch_index_uid": "myindex",
  "status": "active", // "added", "completed", "failed", "active", "wait", "delayed"
  "nb_page_crawled": 20,
  "nb_page_indexed": 15
}
```
It is possible to add additional information to the webhook payload through the `webhook_payload` configuration option.
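As an illustration, a minimal receiver for these calls could look like this; the sketch assumes Express, and the `/webhook` path and port are placeholders:

```ts
import express from 'express'

const app = express()
app.use(express.json())

app.post('/webhook', (req, res) => {
  // The crawler authenticates with `Authorization: Bearer ${WEBHOOK_TOKEN}`.
  if (req.headers.authorization !== `Bearer ${process.env.WEBHOOK_TOKEN}`) {
    res.status(401).end()
    return
  }
  const { status, nb_page_crawled, nb_page_indexed } = req.body
  console.log(`Crawler ${status}: ${nb_page_crawled} pages crawled, ${nb_page_indexed} indexed`)
  res.status(200).end()
})

app.listen(3000)
```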
```bash
docker run --rm --env-file .env getmeili/scrapix
docker run --rm --env CRAWLER_CONFIG=$CRAWLER_CONFIG getmeili/scrapix
```
The CI handles the publishing.