plug-city/apiscraper
The API Scraper is a Python 3.x tool designed to find "hidden" API calls powering a website.

Installation

The following Python libraries should be installed (with pip, or the package manager of your choice):
  • Selenium
  • Requests
Download the latest ChromeDriver and the BrowserMob Proxy binary, and put them in the apiscraper root directory. If your directory names or locations differ, update line 35 of apiFinder.py accordingly:

self.browser = Browser("chromedriver/chromedriver", "browsermob-proxy-2.1.4/bin/browsermob-proxy", self.harDirectory)

Usage

The script can be run from the command line using: $ python3 consoleservice.py [commands]

If you are unsure which commands to use, run with the -h flag: $ python3 consoleservice.py -h

usage: consoleservice.py [-h] [-u [U]] [-d [D]] [-s [S]] [-c [C]] [--p]

optional arguments:
  -h, --help  show this help message and exit
  -u [U]      Target URL. If not provided, the target directory will be scanned for HAR files.
  -d [D]      Target directory (default is "hars"). If a URL is provided, the directory will store HAR files. If no URL is provided, the directory will be scanned.
  -s [S]      Search term
  -c [C]      Count of pages to crawl (with target URL only)
  --p         Flag; remove unnecessary parameters (may dramatically increase run time)
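Under the hood, the tool captures browser traffic into HAR files and searches them for API-like requests. As a rough, stdlib-only illustration of what scanning a HAR directory for JSON responses can look like (find_api_calls and the demo HAR below are hypothetical, not the project's actual code in apiFinder.py):

```python
import json
import os
import tempfile

def find_api_calls(har_dir, search_term=None):
    """Scan a directory of HAR files and collect URLs of requests that
    returned JSON, optionally filtered by a search term in the URL."""
    hits = []
    for name in os.listdir(har_dir):
        if not name.endswith(".har"):
            continue
        with open(os.path.join(har_dir, name)) as f:
            har = json.load(f)
        for entry in har.get("log", {}).get("entries", []):
            mime = entry["response"]["content"].get("mimeType", "")
            url = entry["request"]["url"]
            if "json" in mime and (search_term is None or search_term in url):
                hits.append(url)
    return hits

# Demo with a minimal hand-built HAR file:
demo = {"log": {"entries": [
    {"request": {"url": "https://example.com/api/items?page=1"},
     "response": {"content": {"mimeType": "application/json"}}},
    {"request": {"url": "https://example.com/style.css"},
     "response": {"content": {"mimeType": "text/css"}}},
]}}
d = tempfile.mkdtemp()
with open(os.path.join(d, "example.har"), "w") as f:
    json.dump(demo, f)
print(find_api_calls(d, search_term="api"))
# → ['https://example.com/api/items?page=1']
```

The real tool does considerably more (crawling with Selenium, deduplicating endpoints, pruning parameters with --p), but the HAR entries it inspects have exactly this request/response shape.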

Running the API

Hey, I heard you like APIs so I'm writing an API to find you APIs so you can API while you API.

Kicking this off over HTTP is not necessary at all, but a Flask wrapper around the API Finder is included, so it might as well be documented!

Install Flask, then start the development server:

export FLASK_APP=webservice.py
flask run
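For reference, a minimal Flask wrapper in this spirit might look like the sketch below. The /scan route, its u and c query parameters, and the placeholder finder are assumptions for illustration, not the actual endpoints exposed by webservice.py:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def find_api_calls(url, pages=1):
    # Placeholder standing in for the real crawler in apiFinder.py.
    return {"url": url, "pages": pages, "apiCalls": []}

@app.route("/scan")
def scan():
    # Mirror the CLI's -u (target URL) and -c (page count) options.
    url = request.args.get("u", "")
    pages = int(request.args.get("c", 1))
    return jsonify(find_api_calls(url, pages))
```

With the server running, a request such as curl "http://127.0.0.1:5000/scan?u=http://example.com&c=2" would return the finder's results as JSON.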
