A Node.js module for downloading images from a website



A Node module for downloading images to disk from a given URL.


Installation

    npm install img-crawler

Running the tests

From the module directory run:

    npm test

Without npm:

    make test


Usage

Download images from 'pearljam.com' and write them to the 'pj-imgs' directory. The directory will be created if it does not exist, and its path is resolved to an absolute path.

    var crawler = require('img-crawler');
    var opts = {
        url: 'http://pearljam.com',
        dist: 'pj-imgs'
    };
    crawler.crawl(opts, function(err, data) {
        console.log('Downloaded %d imgs from %s', data.imgs.length, opts.url);
    });

The callback

In keeping with Node convention, the callback accepts an error object first, followed by a data object describing the downloaded images. The err object is provided only if loading the web page itself fails; failures of individual images are reported in their img responses instead.
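For example, per-image failures can be separated from successes using the success flag. The data object below is a hypothetical stand-in, shaped like the response format documented below:

```javascript
// Hypothetical per-image results, shaped like the documented
// response format (values are illustrative).
var data = {
    imgs: [
        { src: 'img/a-img.png', statusCode: 200, success: true,
          path: '/tmp/pj-imgs/img/a-img.png' },
        { src: 'img/another-img.png', statusCode: 404, success: false }
    ]
};

// Split results on the success flag, since err only covers
// page-load failures, not individual image failures.
var downloaded = data.imgs.filter(function (img) { return img.success; });
var failed = data.imgs.filter(function (img) { return !img.success; });

console.log('%d downloaded, %d failed', downloaded.length, failed.length);
```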

Here's an example of a response:

    {
        imgs: [
            {
                src: 'img/a-img.png',
                statusCode: 200,
                success: true,
                path: '/Users/radvieira/my-imgs/img/a-img.png'
            },
            {
                src: 'img/another-img.png',
                statusCode: 404,
                success: false
            }
        ]
    }

In this case the first image was downloaded and written to disk while the second failed. Note that there is no path attribute for the failed download.