# Scrawler


Scrawler is a declarative, scriptable web robot (crawler) and scraper which you can easily configure to parse any website and process the extracted information into your desired format.

Configuration is based on building blocks, for which you can provide your own implementations, allowing further customization of the process.

## Install

As usual, start by installing the library with Composer:

```shell
composer require sobak/scrawler
```

## Usage

```php
<?php

use App\PostEntity;
use Sobak\Scrawler\Block\Matcher\CssSelectorHtmlMatcher;
use Sobak\Scrawler\Block\Matcher\CssSelectorListMatcher;
use Sobak\Scrawler\Block\ResultWriter\FilenameProvider\EntityPropertyFilenameProvider;
use Sobak\Scrawler\Block\ResultWriter\JsonFileResultWriter;
use Sobak\Scrawler\Block\UrlListProvider\ArgumentAdvancerUrlListProvider;
use Sobak\Scrawler\Configuration\Configuration;
use Sobak\Scrawler\Configuration\ObjectConfiguration;

require 'vendor/autoload.php';

$scrawler = new Configuration();

$scrawler
    ->setOperationName('Sobakowy Blog')
    ->setBaseUrl('http://sobak.pl')
    ->addUrlListProvider(new ArgumentAdvancerUrlListProvider('/page/%u', 2))
    ->addObjectDefinition('post', new CssSelectorListMatcher('article.hentry'), function (ObjectConfiguration $object) {
        $object
            ->addFieldDefinition('date', new CssSelectorHtmlMatcher('time.entry-date'))
            ->addFieldDefinition('content', new CssSelectorHtmlMatcher('div.entry-content'))
            ->addFieldDefinition('title', new CssSelectorHtmlMatcher('h1.entry-title a'))
            ->addEntityMapping(PostEntity::class)
            ->addResultWriter(PostEntity::class, new JsonFileResultWriter([
                'directory' => 'posts/',
                'filename' => new EntityPropertyFilenameProvider([
                    'property' => 'slug',
                ]),
            ]))
        ;
    })
;

return $scrawler;
```

After saving the configuration file (for example as config.php), all you have to do is execute this command:

```shell
php vendor/bin/scrawler crawl config.php
```

The example shown above will fetch http://sobak.pl, then iterate over all existing post pages (stopping at the first 404) starting from the second one, get all posts on each page, map them to App\PostEntity objects, and finally write the results to individual JSON files, using post slugs as filenames.

As you can see, with this short piece of code (almost half of it being imports) you can accomplish a fairly tedious task for which you would otherwise need to pull in several libraries, define the rules to follow, and provide the correct mapping to write the files down. Scrawler does it all for you!
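The configuration above references an App\PostEntity class that you define in your own application. The exact mapping contract is described in Scrawler's documentation; as a hypothetical sketch, assuming matched fields are mapped onto same-named properties, it might look roughly like this:

```php
<?php

namespace App;

// Hypothetical entity sketch: property names match the field definitions
// from the configuration ('date', 'content', 'title'), plus the 'slug'
// property read by EntityPropertyFilenameProvider. Check Scrawler's docs
// for the actual entity mapping contract before relying on this shape.
class PostEntity
{
    public $date;

    public $content;

    public $title;

    public $slug;
}
```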

Note: by design, Scrawler does not aim to execute client-side code. This is completely doable (look at headless Chrome, or even PhantomJS if you like history) but I consider it out of scope for this project and have no interest in developing it. Thanks for understanding.

## Documentation

For the detailed documentation please check the table of contents below.

If you are already familiar with the basic Scrawler concepts, you will probably be most interested in the "Blocks" chapter. A block in Scrawler is an abstracted, swappable piece of logic defining a crawling, scraping, or result-processing operation, which you can customize using one of the many built-in classes or even your own, tailored implementation. Looking at the example above, you could provide custom logic for the UrlListProvider or ResultWriter (just two examples of the many available block types).
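To give a feel for what a custom block might look like, here is a hypothetical sketch of a URL list provider that serves a fixed set of paths. The actual interface name and method signature are defined under src/Block/UrlListProvider in the Scrawler source; check them (and the "Blocks" chapter) before implementing, as this sketch only illustrates the general shape.

```php
<?php

namespace App\Block;

// Hypothetical custom block: a URL list provider that yields a fixed,
// predefined list of relative URLs instead of advancing a numeric
// argument. The real Scrawler interface it must implement lives in
// src/Block/UrlListProvider — adapt the class accordingly.
class FixedUrlListProvider
{
    private $urls;

    public function __construct(array $urls)
    {
        $this->urls = $urls;
    }

    // Returns the relative URLs the crawler should visit, in order.
    public function getUrls(): array
    {
        return $this->urls;
    }
}
```

Assuming the provider contract matches, you would register it like any built-in block: `->addUrlListProvider(new FixedUrlListProvider(['/about', '/contact']))`.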

Note: I have to admit I am not a fan of excessive DocBlock usage. That's why documentation in the code is sparse and focuses mainly on interfaces, especially those for creating custom block implementations. Use the documentation linked above and, of course, read the code.

## Just be polite

Before you start tinkering with the library, please remember: some people do not want their websites scraped by bots. With a growing share of bandwidth consumed by bots, this is not only problematic from a business standpoint but can also make handling all that traffic expensive. Please respect that. Even though Scrawler provides implementations for some blocks which might be useful to mimic an actual internet user, you should not use them to bypass anti-scraping measures taken by website owners.

Note: for testing purposes you can freely crawl my website, excluding its subdomains. Just please keep the default user agent.

## License

Scrawler is distributed under the MIT license. For details, please check the dedicated LICENSE file.

## Contributing

For details on how to contribute, please check the dedicated CONTRIBUTING file.
