# OWS-scrapy-wrapper

Ready-to-use scrapy wrapper targeting .json specification files that can be created using OpenWebScraper.

## Documentation

The OWS-scrapy-wrapper is standalone in the sense that it takes a run specification JSON file as input, which determines what URLs to crawl, how to process responses, where to store the retrieved data, and so on (see specification.json below). Components such as the parsing of incoming HTTP responses and the further processing of the parsed data (in pipelines or finalizers) can be named in the run specification, in order to keep the scrapy wrapper as extensible as possible.
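
As a minimal sketch of how such a specification might be consumed, the file can be read with the standard `json` module and its dotted component paths resolved via `importlib`. The helper names `load_specification` and `resolve_component` are illustrative assumptions, not the wrapper's actual API:

```python
import importlib
import json


def load_specification(path):
    """Read a run specification JSON file such as specification.json."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)


def resolve_component(dotted_path):
    """Resolve a dotted class path, e.g. 'parsers.ParagraphParser'."""
    module_name, class_name = dotted_path.rsplit(".", 1)
    return getattr(importlib.import_module(module_name), class_name)


spec = load_specification("specification.json")
ParserClass = resolve_component(spec["parser"])  # how parser_data reaches the parser is up to the wrapper
```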

*(Figure: scrapy wrapper component overview)*

### specification.json

```json
{
    "blacklist": [
        "https?://www\\.example\\.com/do-not-crawl-this.*"
    ],
    "finalizers": {},
    "logs": "C:\\Path\\To\\Your\\Log\\Dir",
    "name": "example crawlname",
    "output": "C:\\Path\\To\\Your\\Output\\Dir",
    "parser": "parsers.ParagraphParser",
    "parser_data": {
        "allowed_languages": [
            "de",
            "en"
        ],
        "keep_langdetect_errors": false,
        "xpaths": [
            "//p",
            "//td"
        ]
    },
    "pipelines": {
        "pipelines.Paragraph2CsvPipeline": 300
    },
    "urls": [
        "http://www.example.com/start-crawling-this",
        "http://www.example.com/start-crawling-that"
    ]
}
```
- blacklist: a list of regular expressions; any URL that matches one of these expressions is not crawled (see the sketch after this list).
- finalizers: a dictionary describing the finalizers to execute after a crawl has finished; each key is the path to a finalizer class and each value is a dictionary of generic data that influences that finalizer's behaviour.
- logs: the directory in which log files are collected.
- name: the name of the crawl.
- output: the directory in which crawl results are stored.
- parser: the path to the parser class that handles all HTTP responses obtained during crawling.
- parser_data: custom data passed to the parser on instantiation.
- pipelines: the scrapy pipelines setting; see the scrapy documentation.
- urls: a list of URL strings used as start URLs; a separate scrapy crawlspider is started for each given URL.
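
As a hedged illustration of two of these fields (the filtering helper below is a sketch, not the wrapper's internal code): blacklist entries are plain regular expressions that can be compiled with Python's `re` module, and the pipelines mapping has the same shape as scrapy's `ITEM_PIPELINES` setting.

```python
import json
import re

# Load the specification produced by OpenWebScraper (the path is illustrative).
with open("specification.json", encoding="utf-8") as f:
    spec = json.load(f)

# A URL matching any blacklist expression is skipped during crawling.
blacklist = [re.compile(pattern) for pattern in spec["blacklist"]]


def is_blacklisted(url):
    return any(pattern.match(url) for pattern in blacklist)


print(is_blacklisted("https://www.example.com/do-not-crawl-this/page"))  # True
print(is_blacklisted("http://www.example.com/start-crawling-this"))      # False

# The pipelines mapping, e.g. {"pipelines.Paragraph2CsvPipeline": 300}, can be handed
# to scrapy as its ITEM_PIPELINES setting; the number is the pipeline's order value.
scrapy_settings = {"ITEM_PIPELINES": spec["pipelines"]}
```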