Learn more about WebPageTest API Integrations in our docs

WebPageTest Crawler

The WebPageTest Crawler crawls a website to discover URLs and then runs WebPageTest tests on them. A crawl depth (level) and a URL limit can be configured.


Requires Node.js and npm.

1. Installing Packages

Once you have cloned the project, run npm install to install the dependencies.

npm install

2. Updating config values

There are 3 main config values:

  1. wpt_api_key - your WebPageTest API key. Get yours here
  2. level - integer, the maximum depth the crawler should crawl.
  3. url_limit - integer, the maximum number of URLs to test. Note: crawling stops as soon as either limit is reached.
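As a sketch only: assuming the three values live in a JSON config file at the repo root (the actual file name and structure may differ, so check the cloned project before relying on this), it could look like:

```shell
# Hypothetical config sketch -- the file name (config.json) and its shape are
# assumptions, not taken from the repo; the API key value is a placeholder.
cat > config.json <<'EOF'
{
  "wpt_api_key": "YOUR_WPT_API_KEY",
  "level": 2,
  "url_limit": 50
}
EOF
cat config.json
```

With these example values, the crawler would stop after testing 50 URLs even if it has not yet reached depth 2, and vice versa.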

3. Adding an initial URLs txt file

You can add your initial set of URLs to the startingUrls.txt file, separating them with commas.
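For example, a seed file with two placeholder URLs could be created like this:

```shell
# Write two placeholder seed URLs, comma-separated, into startingUrls.txt
printf '%s' 'https://example.com,https://example.com/blog' > startingUrls.txt
cat startingUrls.txt
```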


4. Let's fire it up

Start the server by running npm start:

npm start

Booyah, once the crawl and testing are complete you'll have a report.csv file containing performance details for the crawled URLs.

Running as an npm module

You can run the project as an npm module as well.

1. Install as npm module

npm i https://github.com/abdulsuhail/wpt-crawler.git

2. Let's fire it up

npx wpt-crawler -k "wpt_api_key" -f "./startingUrls.txt" 

3. To look up more options

npx wpt-crawler -h
