A crawler for scraping posts from Medium blogs.
Python 3 is required, along with pip3 (on my system, pip points to pip3 by default).
First clone this repo and install the dependencies:
git clone https://github.com/NISH1001/medium-crawler
cd medium-crawler
pip install -r requirements.txt
Since the crawler uses a Selenium WebDriver (in headless mode), be sure to have Firefox or Chrome installed
on your system. You can always switch the browser in the code if you feel like it.
The web drivers should be on the system PATH. Firefox requires geckodriver: download the geckodriver
executable and put it on the system PATH.
A similar process applies for Chrome or Chromium, which use chromedriver.
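Before running the crawler, you can sanity-check that a driver binary is actually discoverable on your PATH. This is a small standalone sketch (not part of the crawler itself); `geckodriver` and `chromedriver` are the standard binary names shipped by Mozilla and Google.

```python
import shutil


def webdriver_on_path(binary):
    """Return the resolved path of a webdriver binary, or None if it is not on PATH."""
    return shutil.which(binary)


if __name__ == "__main__":
    for name in ("geckodriver", "chromedriver"):
        print(f"{name}: {webdriver_on_path(name) or 'not found on PATH'}")
```

If a driver prints "not found on PATH", Selenium will raise an error when the crawler tries to start the browser.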
If nothing works out, you can pass the driver path directly in the crawler's constructor
(a kind of manual override :D), like:
self.driver = webdriver.Chrome(path_to_web_driver)
The crawler requires a username, a dump type, and the directory where the data is to be dumped.
python mcrawler.py -h
usage: mcrawler [-h] -u USER -t TYPE -dd DUMP_DIR

Crawl shit from medium

optional arguments:
  -h, --help            show this help message and exit
  -u USER, --user USER  The username for medium
  -t TYPE, --type TYPE  The format for dumping -> text, json
  -dd DUMP_DIR, --dump-dir DUMP_DIR
                        The directory where the data is to be dumped
python mcrawler.py -u nishparadox -t text -dd data/