Simple crawler for Single Page Applications.
It crawls recursively until every link found from the start URL has been saved.
Do not hesitate to open a feature request or a bug report.
## Requirements
You'll need PhantomJS (v1.9 or greater) installed in order to use this package.

## Installing
```
npm install -g snapshooter
```
## Usage

```
Usage: snapshooter [options] [params]

Options:
  -i, --input            Input url to index
  -o, --output           Output folder to save indexed files
  -e, --exclude          Regex pattern for excluding files (pass between quotes)
  -p, --pretty           Output indexed files in a pretty fashion way
  -s, --server           Start a server for previewing indexed content
  -P, --port             Preview server port [default: 8080]
  -f, --forward          Avoid indexing links up to the initial url folder
  -t, --timeout          Time limit (in seconds) to wait for a page to render [default: 15]
  -m, --max-connections  Max connections limit, use with care [default: 10]
  -l, --log              Show 'console.log' messages (try disabling it if phantom crashes)
  -L, --live             Creates a "live" tunnel, crawling will happen on demand
  -O, --once             Avoid recursivity, index only the given url and nothing else
  -S, --stdout           Prints indexed content to stdout (auto-set -O=true -l=false)
  -V, --verbose          Shows info logs about files skipped
  -D, --delete           Automatically delete destination folder before writing new files
  -X, --overwrite        Automatically overwrite destination folder with new files
  -H, --hidden           Doesn't inject the `window.snapshooter=true` on pages being indexed
  -v, --version          Shows snapshooter version
  -h, --help             Shows this help screen

Examples:
  snapshooter -i <site.com> -o <local-folder>
  snapshooter -i <site.com> -o <local-folder> -p
  snapshooter -i <site.com> -o <local-folder> -ps [-P 3000] [-e '/\.exe$/m'] [-t 20000]
```
Since you have a Single Page Application, you probably have some render method, and possibly a callback that runs when rendering completes. All you need to do is inform Snapshooter that the page has finished rendering, which is achieved by setting the property:
```js
window.crawler.is_rendered = true
```
Snapshooter will keep waiting on the page until this variable is set to `true`; then the rendered DOM will be saved as a plain HTML file.
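As a minimal sketch of how your app might set that flag (the `renderApp` function and its `done` callback are hypothetical stand-ins for your own rendering code; in the browser, `window` already exists):

```javascript
// Stand-in for the browser's global `window` object (it already
// exists in a real page; this stub just makes the sketch runnable).
const window = { crawler: {} };

// Hypothetical render method: builds the DOM, then invokes `done`.
function renderApp(done) {
  // ... build the DOM here ...
  done();
}

renderApp(() => {
  // Tell Snapshooter the DOM is ready to be snapshotted.
  window.crawler.is_rendered = true;
});
```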
We found ourselves many times having to "clean" the DOM before saving it to a file, or simply wanting to save the content without header and footer. To make that possible, Snapshooter can load a CoffeeScript file and filter the source just before it gets written to the HTML file. Create a CoffeeScript file exporting a `before_save` method and point to it with the `-k` option on the command line. The method receives a jQuery object which you can use to manipulate the DOM; return the piece of HTML you want to save.
```coffee
# removing script tags from the DOM
exports.before_save = ( $ ) ->
  $.find( 'script' ).remove()
  return $.html()

# returning just the content div
exports.before_save = ( $ ) ->
  return $.find( '#content' ).html()
```
Then, from the command line:
```
snapshooter -i <site.com> -o <local-folder> -k my_hook_file.coffee
```
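To illustrate the hook contract itself, here is a plain JavaScript sketch (not how snapshooter runs your hook — the real hook must be a CoffeeScript file passed with `-k`; the `fakePage` object below is a hypothetical stand-in for the jQuery wrapper the hook receives):

```javascript
// Same logic as the second CoffeeScript example: keep only #content.
const beforeSave = ($) => $.find('#content').html();

// Hypothetical stand-in for the jQuery object passed to before_save.
const fakePage = {
  find(selector) {
    const store = { '#content': '<p>Hello</p>' };
    return { html: () => store[selector] || '' };
  },
};

// The string returned here is what would be written to the HTML file.
const output = beforeSave(fakePage);
```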
Download the repo and have fun, pull requests are more than welcome.
```
git clone git://github.com/serpentem/snapshooter.git
cd snapshooter
npm link
```
To build, just run:
During development you may prefer:
Do not mess with the version number.