# medium-crawler

A crawler for scraping posts from Medium blogs.

## Dependencies

Python 3 is required, along with `pip3` (on my system, `pip` points to Python 3 by default).

First, clone this repo:

```bash
git clone https://github.com/NISH1001/medium-crawler
```

Install the dependencies using `requirements.txt`:

```bash
pip install -r requirements.txt
```

## Webdriver

Since the crawler uses a Selenium driver (more specifically, a headless one), make sure Firefox or Chrome is installed on your system. You can always change the browser in the code if you like. The web driver executable should be on the system path, e.g. `/usr/bin/` or `/usr/local/bin/`.

Firefox requires geckodriver. Download the geckodriver executable and put it on the system path. A similar process applies to Chrome and Chromium, which use chromedriver.
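You can quickly check that a driver is discoverable on your path:

```bash
which geckodriver    # or: which chromedriver
```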

If nothing works out, you can pass the driver path directly in the constructor (kind of a manual override :D), like:

```python
self.driver = webdriver.Chrome(path_to_web_driver)
```
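For reference, here is a minimal, self-contained sketch of how a headless Firefox driver with an explicit geckodriver path might be set up (Selenium 3-style API; the path and URL below are illustrative, not the crawler's actual values):

```python
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

# Run the browser without opening a window.
options = Options()
options.add_argument("--headless")

# Pass the driver path explicitly if geckodriver is not on the system path.
# "/usr/local/bin/geckodriver" is a hypothetical location.
driver = webdriver.Firefox(
    executable_path="/usr/local/bin/geckodriver",
    options=options,
)

driver.get("https://medium.com/@nishparadox")
print(driver.title)
driver.quit()
```

Note that newer Selenium 4 releases replace `executable_path` with a `Service` object, so adjust to whatever version `requirements.txt` pins.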

## Usage

The crawler requires a Medium username, a dump type, and the directory where the data is to be dumped.

### help me

```bash
python mcrawler.py -h
usage: mcrawler [-h] -u USER -t TYPE -dd DUMP_DIR

Crawl shit from medium

optional arguments:
  -h, --help            show this help message and exit
  -u USER, --user USER  The username for medium
  -t TYPE, --type TYPE  The format for dumping -> text, json
  -dd DUMP_DIR, --dump-dir DUMP_DIR
                        The directory where the data is to be dumped
```

### example

```bash
python mcrawler.py -u nishparadox -t text -dd data/
```
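To dump the posts as JSON instead of plain text, swap the `--type` value:

```bash
python mcrawler.py -u nishparadox -t json -dd data/
```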