Web scraping Reddit without the Reddit API, building a dataset, and using that dataset for a machine learning project.

Web Scraping Reddit

Web scraping /r/MachineLearning with BeautifulSoup and Selenium, without using the Reddit API, since you mostly web scrape when an API is not available -- or just when it's easier.

If you found this repository useful, consider giving it a star so that you can easily find it again.

Features

  • The app can scrape most of the available data, as shown in the database diagram.
  • Choose a subreddit and a filter.
  • Control approximately how many posts to collect.
  • Headless browser: run the app in the background and do other work in the meantime.
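The scraping step boils down to pulling titles and other fields out of the rendered HTML. The project uses BeautifulSoup for this; below is a minimal stdlib-only sketch of the same idea with html.parser. The HTML snippet and the "title" class name are illustrative, not Reddit's actual markup.

```python
# Sketch: collect the text inside <a class="title"> elements from a chunk of
# HTML. Illustrative markup only; the project itself parses Reddit pages with
# BeautifulSoup.
from html.parser import HTMLParser


class TitleCollector(HTMLParser):
    """Gather the text of every <a class="title"> element."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == "a" and ("class", "title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data)


parser = TitleCollector()
parser.feed('<div><a class="title">Paper discussion</a></div>'
            '<div><a class="title">Project showcase</a></div>')
print(parser.titles)  # -> ['Paper discussion', 'Project showcase']
```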

Database Diagram

Future improvements

This app is not robust enough yet. Web scraping involves many edge cases, and handling them better is the main area for future improvement.

Pull requests are welcome.

  • Get rid of as many varchars in the comment table as possible; use int columns for higher-quality data.
  • Make the app robust.
  • Download the chromedriver automatically.
  • Add support for more browsers (Firefox first).
  • Give categories and flairs their own tables; right now they are just concatenated into a string.
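The last item above can be sketched with sqlite3: instead of storing a post's flairs as one concatenated string, give each flair its own row in a separate table keyed by post id. The table and column names here are illustrative, not the project's actual schema.

```python
# Sketch of the proposed normalization: one row per flair instead of a
# concatenated string like 'Discussion,Research'. Illustrative schema only.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE post  (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE flair (id INTEGER PRIMARY KEY,
                    post_id INTEGER REFERENCES post(id),
                    name TEXT);
""")

cur.execute("INSERT INTO post (id, title) VALUES (1, 'Example post')")
cur.executemany("INSERT INTO flair (post_id, name) VALUES (?, ?)",
                [(1, "Discussion"), (1, "Research")])

# Fetching a post's flairs is now a simple query instead of string splitting.
flairs = [row[0] for row in
          cur.execute("SELECT name FROM flair WHERE post_id = 1")]
print(flairs)  # -> ['Discussion', 'Research']
```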

Install

To run this project, you need to have the following (besides Python 3+):

  1. Chrome browser installed on your computer.
  2. Open chrome://settings/help in Chrome to find your browser version, then download the matching chromedriver version.
  3. Place the chromedriver in the core folder of this project.
  4. Install the packages from the requirements.txt file with pip install -r requirements.txt

After these steps, you can run scraper.py to scrape and store the Reddit data in a SQLite database. It is recommended to download DB Browser for SQLite to browse the database. If you want to pull the data out, use make_dataset.py. As a final demonstration, you can run a machine learning project using mlproj.py.
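Pulling the data out amounts to querying the SQLite file the scraper produced. The sketch below uses an in-memory database with an illustrative post table; with the real database you would pass the scraper's .db path to sqlite3.connect instead.

```python
# Sketch: reading scraped rows back out of SQLite, roughly the idea behind
# make_dataset.py. Table name and columns are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")  # replace with the path to the scraped .db
cur = conn.cursor()

# Stand-in for data the scraper would have stored.
cur.execute("CREATE TABLE post (title TEXT, score INTEGER)")
cur.executemany("INSERT INTO post VALUES (?, ?)",
                [("First post", 10), ("Second post", 3)])

rows = cur.execute(
    "SELECT title, score FROM post ORDER BY score DESC").fetchall()
print(rows)  # -> [('First post', 10), ('Second post', 3)]
```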

Note: If your Chrome browser automatically updates to a new version, the chromedriver that you downloaded will almost surely stop working, and you will need to download a matching version again.
