x-ray

The next web scraper. See through the <html> noise.
var Xray = require('x-ray');
var x = Xray();

x('', '', [{
  title: '.dribbble-img strong',
  image: '.dribbble-img [data-src]@data-src',
}])(function(err, results) {
  console.log(results);
})

Installation

npm install x-ray


Features

  • Flexible schema: Supports strings, arrays, arrays of objects, and nested object structures. The schema is not tied to the structure of the page you're scraping, allowing you to pull the data in the structure of your choosing.

  • Composable: The API is entirely composable, giving you great flexibility in how you scrape each page.

  • Pagination support: Paginate through websites, scraping each page. X-ray also supports a request delay and a pagination limit. Scraped pages can be streamed to a file, so if there's an error on one page, you won't lose what you've already scraped.

  • Crawler support: Start on one page and move to the next easily. The flow is predictable, following a breadth-first crawl through each of the pages.

  • Responsible: X-ray has support for concurrency, throttles, delays, timeouts and limits to help you scrape any page responsibly.

  • Pluggable drivers: Swap in different scrapers depending on your needs. Currently supports HTTP and PhantomJS drivers. In the future, I'd like to see a Tor driver for requesting pages through the Tor network.

Selector API

xray(url, selector)(fn)

Scrape the url for the given selector, returning the result in the callback fn. The selector takes an enhanced jQuery-like string that can also select attributes. The syntax for selecting an attribute is selector@attribute. If you do not supply an attribute, the default is to select the innerText.

Here are a few examples:

  • Scrape a single tag
xray('', 'title')(function(err, title) {
  console.log(title) // Google
})
  • Scrape a single class
xray('', '.content')(fn)
  • Scrape an attribute
xray('', 'img.logo@src')(fn)
  • Scrape innerHTML
xray('', 'body@html')(fn)
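As a rough illustration of the selector@attribute convention, here is a tiny standalone sketch of how such a string splits into its selector and attribute parts. The parseSelector helper is hypothetical and not part of x-ray's API; x-ray's actual parsing is internal and may differ.

```javascript
// Hypothetical helper illustrating the selector@attribute convention.
// Not part of x-ray; shown only to make the syntax concrete.
function parseSelector(str) {
  var at = str.lastIndexOf('@');
  // No '@' present: fall back to innerText, as the docs describe.
  if (at === -1) return { selector: str, attribute: 'innerText' };
  return {
    selector: str.slice(0, at),
    attribute: str.slice(at + 1)
  };
}

console.log(parseSelector('img.logo@src'));
// => { selector: 'img.logo', attribute: 'src' }
console.log(parseSelector('title'));
// => { selector: 'title', attribute: 'innerText' }
```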

xray(url, scope, selector)

You can also supply a scope to each selector. In jQuery, this would look something like this: $(scope).find(selector).

xray(html, scope, selector)

Instead of a url, you can also supply raw HTML and all the same semantics apply.

var html = "<body><h2>Pear</h2></body>";
x(html, 'body', 'h2')(function(err, header) {
  header // => Pear
})



xray.driver(driver)

Specify a driver to make requests through.


xray.write([path])

Stream the results to a path. If no path is provided, a readable stream is returned. This makes it easy to build APIs around x-ray. Here's an example with Express:

var app = require('express')();
var x = require('x-ray')();

app.get('/', function(req, res) {
  x('', 'title').write().pipe(res);
});


xray.paginate(selector)

Select a url from a selector and visit that page.


xray.limit(n)

Limit the amount of pagination to n requests.

xray.delay(from, [to])

Delay the next request between from and to milliseconds. If only from is specified, delay exactly from milliseconds.
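A minimal standalone sketch of that semantics (not x-ray's internal code; the pickDelay helper is hypothetical):

```javascript
// Sketch of the delay semantics described above (not x-ray's internals):
// with both bounds, pick a random wait in [from, to]; with only `from`,
// wait exactly `from` milliseconds.
function pickDelay(from, to) {
  if (to == null) return from;
  return from + Math.random() * (to - from);
}

console.log(pickDelay(500));        // 500
console.log(pickDelay(1000, 2000)); // somewhere between 1000 and 2000
```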


xray.concurrency(n)

Set the request concurrency to n. Defaults to Infinity.

xray.throttle(n, ms)

Throttle the requests to n requests per ms milliseconds.
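To make the "n requests per ms milliseconds" idea concrete, here is a minimal sliding-window throttle sketch. This is a standalone illustration of the concept, not x-ray's implementation, and the makeThrottle helper is hypothetical:

```javascript
// Sliding-window throttle sketch: allows at most n calls per windowMs.
// Purely illustrative; x-ray handles throttling internally.
function makeThrottle(n, windowMs) {
  var stamps = [];
  return function allowed(now) {
    // Keep only the timestamps still inside the window.
    stamps = stamps.filter(function (t) { return now - t < windowMs; });
    if (stamps.length >= n) return false;
    stamps.push(now);
    return true;
  };
}

var allow = makeThrottle(2, 1000); // 2 requests per second
console.log(allow(0));    // true
console.log(allow(100));  // true
console.log(allow(200));  // false (2 per second exceeded)
console.log(allow(1100)); // true (window has slid forward)
```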

xray.timeout(ms)

Specify a timeout of ms milliseconds for each request.


Collections

X-ray also has support for selecting collections of tags. While x('ul', 'li') will only select the first list item in an unordered list, x('ul', ['li']) will select all of them.

Additionally, X-ray supports "collections of collections" allowing you to smartly select all list items in all lists with a command like this: x(['ul'], ['li']).


Composition

X-ray becomes more powerful when you start composing instances together. Here are a few possibilities:

Crawling to another site

var Xray = require('x-ray');
var x = Xray();

x('', {
  main: 'title',
  image: x('#gbar a@href', 'title') // follow link to google images
})(function(err, obj) {
  // obj => {
  //   main: 'Google',
  //   image: 'Google Images'
  // }
})

Scoping a selection

var Xray = require('x-ray');
var x = Xray();

x('', {
  title: 'title',
  items: x('.item', [{
    title: '.item-content h2',
    description: '.item-content section'
  }])
})(function(err, obj) {
  // obj => {
  //   title: '',
  //   items: [{
  //     title: 'The 100 Best Children\'s Books of All Time',
  //     description: 'Relive your childhood with TIME\'s list...'
  //   }]
  // }
})


In the Wild

  • Levered Returns: Uses x-ray to pull together financial data from various unstructured sources around the web.



Tests

To run the tests, run:

npm install phantomjs -g
npm install
make test

You'll need to be running node >= 0.11.0 to run the tests, since they rely on generators.



