Scrape pages and email me when a new linked item is added


Lambda Scraper is an AWS Lambda function that scrapes any number of web pages you define, looks for new links on each page, and (optionally) filters the results by keyword. If it finds new results, it sends you an email via AWS SES.
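The keyword filter can be pictured as a simple case-insensitive title match. This is a minimal sketch of the idea, not the repo's actual implementation — matchesKeywords is a hypothetical helper:

```javascript
// Hypothetical sketch of the keyword filter: keep a scraped item when any
// keyword appears in its title (case-insensitive). Keywords are optional,
// so an empty list keeps everything.
function matchesKeywords(item, keywords) {
  if (!keywords || keywords.length === 0) return true;
  const title = item.title.toLowerCase();
  return keywords.some(kw => title.includes(kw.toLowerCase()));
}

const items = [
  { title: "Senior Fashion Designer" },
  { title: "Backend Engineer" }
];
const hits = items.filter(item => matchesKeywords(item, ["fashion", "sales"]));
// hits -> [{ title: "Senior Fashion Designer" }]
```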

Use it to add notification functionality to sites that don't natively have notifications. Use it to be notified of new product listings, cheap flights, job listings, or whatever you dream up.

The initial use case was as a Careers page scraper for my 👫, because it's hard out there for fashion students, and a lot of Careers pages offer no way of being notified of new postings. I'm not sure what else you might use this for, but if you come up with something good, let me know.


Experience working with AWS will be super handy if you want to set this up for yourself. If my instructions are unclear and you'd like to get this set up, ping me and I might put together a video walkthrough if you pressure me enough.

  1. Download this repo and create a config.js file (see config.example.js for the initial format of this file). Go ahead and set email_to (your email), email_subject, and your AWS API key and secret. You'll set the other values in the next steps.
  2. Create an S3 bucket, then enter the bucket name in config.js from step 1.
  3. Create and verify an SES sending email or domain, then enter the sending email address (email_from) in config.js. Set the aws_region in your config.js to the region you used for your SES email.
  4. Define the pages and their selectors in pages.js (see below).
  5. Locally, run npm install.
  6. Zip your project folder and upload it to Lambda (see below).
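A config.js for step 1 might look like the following. Note this is a sketch: email_to, email_from, email_subject, and aws_region are key names confirmed by the steps above, but the key names for the API credentials and bucket are assumptions — check config.example.js for the real ones.

```javascript
// Hypothetical config.js sketch. The aws_key, aws_secret, and bucket key
// names are assumptions; confirm them against config.example.js.
module.exports = {
  aws_key: "YOUR_AWS_API_KEY",       // assumed key name (step 1)
  aws_secret: "YOUR_AWS_API_SECRET", // assumed key name (step 1)
  aws_region: "us-east-1",           // must match your SES region (step 3)
  bucket: "my-lambda-scraper",       // assumed key name: S3 bucket (step 2)
  email_to: "you@example.com",
  email_from: "verified@yourdomain.com", // must be SES-verified (step 3)
  email_subject: "New listings found"
};
```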

HTML pages

As an example, say the HTML you're trying to scrape from a page looks like this:

  <li class="posting">
    <h1>Hello world</h1>
    <a class="more" href="">Read more</a>
    <span class="location">NYC</span>
  </li>

  <li class="posting">
    <h1>Hello world</h1>
    <a class="more" href="">Read more</a>
    <span class="location">NYC</span>
  </li>

In pages.js, you'd enter:

  {
    url: "", // required
    keywords: ["sales", "marketing", "fashion", "jewelry"], // optional
    parent: ".posting", // required
    selectors: {
      title: "h1", // required
      url: ".more", // required
      location: ".location" // optional key/value (title/selector)
    }
  }

Quick note on images: If you enter a key of image or thumbnail, the selector must point to an img tag.
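Given the sample HTML and selectors above, each .posting parent yields one result object with a key per entry in selectors. The shape below is a sketch of the expected output under that assumption, not output captured from the repo's parser:

```javascript
// Sketch of the result shape: one object per ".posting" parent element,
// with one key per entry in `selectors` (title, url, location). Values
// are taken from the sample HTML above; this is illustrative only.
const results = [
  { title: "Hello world", url: "", location: "NYC" },
  { title: "Hello world", url: "", location: "NYC" }
];
```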

JSON pages

For JSON endpoints, use $ to refer to each item within the parent array.

For example, given a JSON endpoint with a response of:

  {
    "postings": [{
      "title": "Hello world",
      "link": "",
      "location": { "title": "NYC" }
    }]
  }

In pages.js, you'd enter:

  {
    url: "",
    json: true, // required for JSON
    keywords: ["sales", "marketing", "fashion", "jewelry"],
    parent: "postings",
    selectors: {
      title: "$.title",
      url: "$.link",
      center: "$.location.title"
    }
  }
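The $ addressing above can be read as a plain path lookup into each item of the parent array. Here's a hypothetical sketch of that idea — resolvePath is not a function from this repo:

```javascript
// Hypothetical sketch of how a selector like "$.location.title" could be
// resolved against one item of the parent array. "$" means the item
// itself; each dot segment descends one level.
function resolvePath(item, selector) {
  // "$.location.title" -> ["location", "title"]; "$" alone -> []
  const parts = selector.split(".").slice(1);
  return parts.reduce((obj, key) => (obj == null ? undefined : obj[key]), item);
}

const posting = { title: "Hello world", link: "", location: { title: "NYC" } };
resolvePath(posting, "$.title");          // "Hello world"
resolvePath(posting, "$.location.title"); // "NYC"
```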


Create the zip package

You can zip the package as you normally would, or you can run the zip npm script:

$ npm install
$ npm run zip

(You can ignore the .env file this creates in your root directory)

Create the Lambda function

  1. Start with the "canary" blueprint
  2. Create a CloudWatch event and set the rate (e.g., 20 2 * * ? * runs once a day at 2:20 UTC)
  3. In the final review step of creating the Lambda function, make sure to enable the event source.

Lambda settings:

  • Runtime: Node.js 4.3
  • Handler: index.handler
  • Memory: 128 MB should be enough
  • Timeout: 2 minutes should be plenty (Mine hasn't gone beyond 15 seconds)


Debugging

In index.js, set var debug to true.

In your terminal, run the following from the root of the project to see what results are found for the pages you've defined:

$ node
 > require('./index').handler()
 Found results for...