
NAME

AnyEvent::Net::Curl::Queued - Any::Moose wrapper for queued downloads via Net::Curl & AnyEvent

VERSION

version 0.038

SYNOPSIS

    #!/usr/bin/env perl

    package CrawlApache;
    use feature qw(say);
    use strict;
    use utf8;
    use warnings qw(all);

    use HTML::LinkExtor;
    use Any::Moose;

    extends 'AnyEvent::Net::Curl::Queued::Easy';

    after finish => sub {
        my ($self, $result) = @_;

        say $result . "\t" . $self->final_url;

        if (
            not $self->has_error
            and $self->getinfo('content_type') =~ m{^text/html}
        ) {
            my @links;

            HTML::LinkExtor->new(sub {
                my ($tag, %links) = @_;
                push @links,
                    grep { $_->scheme eq 'http' and $_->host eq 'localhost' }
                    values %links;
            }, $self->final_url)->parse(${$self->data});

            for my $link (@links) {
                $self->queue->prepend(sub {
                    CrawlApache->new($link);
                });
            }
        }
    };

    no Any::Moose;
    __PACKAGE__->meta->make_immutable;

    1;

    package main;
    use strict;
    use utf8;
    use warnings qw(all);

    use AnyEvent::Net::Curl::Queued;

    my $q = AnyEvent::Net::Curl::Queued->new;
    $q->append(sub {
        CrawlApache->new('http://localhost/manual/')
    });
    $q->wait;

DESCRIPTION

AnyEvent::Net::Curl::Queued (a.k.a. YADA, Yet Another Download Accelerator) is an efficient and flexible batch downloader with a straightforward interface capable of:

  • creating a queue;
  • appending/prepending URLs;
  • waiting for downloads to finish (retrying on errors).

Download init/finish/error handling is defined through Moose's method modifiers.
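Building on the SYNOPSIS above, a worker subclass can hook into request initialization the same way it hooks into finish. A minimal sketch (the after init modifier and the pairwise setopt call form are assumptions; consult AnyEvent::Net::Curl::Queued::Easy for the exact hooks it exposes):

```perl
package MyWorker;
use strict;
use warnings;
use Any::Moose;

extends 'AnyEvent::Net::Curl::Queued::Easy';

# runs after the default initialization of each request
after init => sub {
    my ($self) = @_;
    $self->setopt(verbose => 1);    # e.g. enable libcurl debug output
};

# runs after each request completes, successfully or not
after finish => sub {
    my ($self, $result) = @_;
    warn $self->final_url, ": ", $result, "\n"
        if $self->has_error;
};

no Any::Moose;
__PACKAGE__->meta->make_immutable;

1;
```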

MOTIVATION

I am very unhappy with the performance of LWP. It is almost perfect at properly handling HTTP headers, cookies & stuff, but that comes at the cost of speed. While this doesn't matter for single downloads, batch downloading becomes a real pain.

When I download a large batch of documents, I don't care about cookies or headers; only the content and proper redirection matter. And, as it is clearly an I/O-bound operation, I want to make as many parallel requests as possible.

So, this is what CPAN offers to fulfill my needs:

AnyEvent::Net::Curl::Queued is a glue module to wrap it all together. It offers no callbacks and (almost) no default handlers. It's up to you to extend the base class AnyEvent::Net::Curl::Queued::Easy so it will actually download something and store it somewhere.

ALTERNATIVES

As there's more than one way to do it, I'll list the alternatives which can be used to implement batch downloads:

BENCHMARK

(see also: CPAN modules for making HTTP requests)

Obviously, every download agent is (or, ideally, should be) I/O bound. However, it is not uncommon for large concurrent batch downloads to hog the processor cycles before consuming the full network bandwidth. The proposed benchmark measures the request rate of several concurrent download agents, trying hard to make all of them CPU bound (by removing the I/O constraint). In practice, these benchmark results mean that download agents with a lower request rate are less appropriate for parallelized batch downloads. On the other hand, download agents with a higher request rate are more likely to reach the full capacity of a network link while still leaving spare resources for data parsing/filtering.

The script eg/benchmark.pl compares AnyEvent::Net::Curl::Queued against several other download agents. Only AnyEvent::Net::Curl::Queued itself, AnyEvent::Curl::Multi, Parallel::Downloader, Mojo::UserAgent and lftp support concurrent downloads natively; thus, Parallel::ForkManager is used to reproduce the same behaviour for the remaining agents.

The download target is a copy of the Apache documentation on a local Apache server. The test platform configuration:

  • Intel® Core™ i7-2600 CPU @ 3.40GHz with 8 GB RAM;
  • Ubuntu 11.10 (64-bit);
  • Perl v5.16.2 (installed via perlbrew);
  • libcurl/7.28.0 (without AsynchDNS, which slows down curl_easy_init()).

The script eg/benchmark.pl uses Benchmark::Forking and Class::Load to keep UA modules isolated and loaded only once.

    $ perl benchmark.pl --count 100 --parallel 8 --repeat 10

                             Request rate WWW::M LWP::UA Mojo::UA HTTP::Lite HTTP::Tiny AE::C::M lftp P::D YADA Furl curl wget LWP::Curl
    WWW::Mechanize v1.72            231/s     --    -59%     -85%       -87%       -89%     -90% -93% -93% -94% -97% -98% -98%      -98%
    LWP::UserAgent v6.04            567/s   145%      --     -64%       -68%       -72%     -77% -82% -83% -85% -92% -94% -95%      -96%
    Mojo::UserAgent v3.54          1590/s   589%    181%       --       -10%       -22%     -34% -49% -53% -59% -76% -83% -87%      -88%
    HTTP::Lite v2.4                1770/s   666%    213%      11%         --       -13%     -27% -44% -48% -54% -74% -81% -85%      -86%
    HTTP::Tiny v0.024              2030/s   779%    259%      28%        15%         --     -16% -36% -40% -48% -70% -78% -83%      -84%
    AnyEvent::Curl::Multi v1.1     2430/s   952%    329%      53%        37%        20%       -- -23% -29% -37% -64% -74% -80%      -81%
    lftp v4.3.1                    3150/s  1262%    456%      98%        78%        55%      30%   --  -8% -19% -53% -66% -74%      -75%
    Parallel::Downloader v0.121560 3410/s  1375%    502%     114%        92%        68%      40%   8%   -- -12% -49% -64% -72%      -73%
    YADA v0.036                    3880/s  1579%    585%     144%       119%        91%      60%  23%  14%   -- -42% -59% -68%      -70%
    Furl v1.00                     6700/s  2795%   1082%     320%       278%       229%     175% 113%  96%  72%   -- -29% -45%      -48%
    curl v7.28.0                   9380/s  3953%   1554%     488%       429%       361%     285% 197% 175% 141%  40%   -- -23%      -27%
    wget v1.12                    12100/s  5139%   2038%     661%       584%       496%     398% 285% 255% 212%  81%  29%   --       -5%
    LWP::Curl v0.12               12800/s  5418%   2152%     701%       620%       528%     425% 305% 274% 229%  91%  36%   5%        --

    (output formatted to show module versions at row labels and keep column labels abbreviated)

ATTRIBUTES

allow_dups

Allow duplicate requests (default: false). By default, requests to the same URL (more precisely, requests with the same signature) are issued only once. To include POST parameters in the signature, you must extend the AnyEvent::Net::Curl::Queued::Easy class. Setting allow_dups to a true value disables the duplicate checks.
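For example, to force the same URL to be fetched twice (by default the second request would be dropped as a duplicate):

```perl
use AnyEvent::Net::Curl::Queued;
use AnyEvent::Net::Curl::Queued::Easy;

# with allow_dups => 1, identical signatures are no longer deduplicated
my $q = AnyEvent::Net::Curl::Queued->new({ allow_dups => 1 });

$q->append(sub {
    AnyEvent::Net::Curl::Queued::Easy->new('http://localhost/')
}) for 1 .. 2;    # both requests will be issued

$q->wait;
```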

common_opts

The "opts" attribute of AnyEvent::Net::Curl::Queued::Easy, shared by all workers initialized under the same queue. You may define the User-Agent string here.
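A sketch of setting a queue-wide User-Agent, assuming libcurl options are named in lowercase as elsewhere in Net::Curl (useragent maps to CURLOPT_USERAGENT, followlocation to CURLOPT_FOLLOWLOCATION):

```perl
use AnyEvent::Net::Curl::Queued;

# options in common_opts are applied to every worker spawned by this queue
my $q = AnyEvent::Net::Curl::Queued->new({
    common_opts => {
        useragent      => 'MyCrawler/0.1 (+http://localhost/)',
        followlocation => 1,    # follow HTTP redirects
    },
});
```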

completed

Count completed requests.

cv

AnyEvent condition variable. Initialized automatically, unless you specify your own. Also reset automatically after "wait", so keep your own reference if you really need it!

max

Maximum number of parallel connections (default: 4; minimum value: 1).

multi

Net::Curl::Multi instance.

queue

ArrayRef to the queue. Has the following helper methods:

queue_push

Append an item at the end of the queue.

queue_unshift

Prepend an item at the head of the queue.

dequeue

Shift an item off the top of the queue.

count

Number of items in the queue.

share

Net::Curl::Share instance.

stats

AnyEvent::Net::Curl::Queued::Stats instance.

timeout

Timeout (default: 60 seconds).
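The knobs above are plain constructor arguments; for example, to raise parallelism and tighten the per-request timeout:

```perl
use AnyEvent::Net::Curl::Queued;

my $q = AnyEvent::Net::Curl::Queued->new({
    max     => 8,     # up to 8 parallel connections (default: 4)
    timeout => 30,    # give up on a request after 30 seconds (default: 60)
});
```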

unique

Signature cache.

watchdog

The last resort against the non-deterministic chaos of evil lurking sockets.

METHODS

start()

Populate empty request slots with workers from the queue.

empty()

Check whether there are any active requests or requests remaining in the queue.

add($worker)

Activate a worker.

append($worker)

Put the worker (instance of AnyEvent::Net::Curl::Queued::Easy) at the end of the queue. For lazy initialization, wrap the worker in a sub { ... }, the same way you do with the Moose default => sub { ... }:

    $queue->append(sub {
        AnyEvent::Net::Curl::Queued::Easy->new({ initial_url => 'http://.../' })
    });

prepend($worker)

Put the worker (instance of AnyEvent::Net::Curl::Queued::Easy) at the beginning of the queue. For lazy initialization, wrap the worker in a sub { ... }, the same way you do with the Moose default => sub { ... }:

    $queue->prepend(sub {
        AnyEvent::Net::Curl::Queued::Easy->new({ initial_url => 'http://.../' })
    });

wait()

Process queue.

CAVEAT

  • Many sources suggest compiling libcurl with c-ares support. This only improves performance if you have to do many DNS resolutions (e.g. access many hosts). If you are fetching many documents from a single server, c-ares initialization will actually slow down the whole process!

SEE ALSO

AUTHOR

Stanislaw Pusep <stas@sysd.org>

COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Stanislaw Pusep.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
