Wombat 🦫

A command-line tool for scraping Reddit subreddit information.

Prerequisites

  • Python 3.13 or newer
  • uv package manager

Setup

1. Clone the repository

```bash
git clone <your-repo-url>
cd wombat
```

2. Install dependencies

```bash
uv sync
```

3. Get Reddit API credentials

Note: Reddit's old self-service API tool (reddit.com/prefs/apps) was deprecated in 2023. New users must apply through Reddit's formal approval process. This project was created using credentials from the legacy system.

If you have existing credentials:

  • You can continue using your CLIENT_ID, CLIENT_SECRET, and USER_AGENT

If you need new credentials:

  • Apply for API access through Reddit's formal developer approval process

Once you have credentials, you'll use them in the .env file in the next step.

4. Configure environment variables

```bash
# Copy the template
cp .env.template .env

# Edit .env with your credentials
# Use your favorite editor (nano, vim, VSCode, etc.)
nano .env
```
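A filled-in .env might look like the following sketch. The variable names come from step 3 (CLIENT_ID, CLIENT_SECRET, USER_AGENT); the values are placeholders you replace with your own credentials:

```
CLIENT_ID=your_client_id_here
CLIENT_SECRET=your_client_secret_here
USER_AGENT=wombat/0.1 by u/your_username
```

Reddit asks that user agents be descriptive and unique, so including the app name, a version, and your username is a common convention.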

Usage

Scrape a Subreddit

Scrape and display information about a subreddit:

```bash
uv run main.py scrape <subreddit_name>
```

Examples:

```bash
# Scrape the Python subreddit
uv run main.py scrape python

# Scrape the ExperiencedDevs subreddit
uv run main.py scrape ExperiencedDevs

# Scrape the learnprogramming subreddit
uv run main.py scrape learnprogramming
```

Output:

```
Display Name: python
Title: Python
Description: News about the programming language Python...
```
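For context, the fields in that output correspond to Reddit's public subreddit metadata (its about.json payload). The sketch below is illustrative only, not Wombat's actual implementation; the function names fetch_subreddit_info and format_info are hypothetical:

```python
import json
from urllib.request import Request, urlopen


def format_info(about: dict) -> str:
    """Render the three fields shown in the output above from an about.json payload."""
    data = about["data"]
    return (
        f"Display Name: {data['display_name']}\n"
        f"Title: {data['title']}\n"
        f"Description: {data['public_description']}"
    )


def fetch_subreddit_info(name: str, user_agent: str = "wombat-sketch/0.1") -> str:
    """Fetch public subreddit metadata; Reddit rejects requests without a User-Agent."""
    req = Request(
        f"https://www.reddit.com/r/{name}/about.json",
        headers={"User-Agent": user_agent},
    )
    with urlopen(req) as resp:
        return format_info(json.load(resp))
```

Keeping format_info separate from the network call lets the rendering logic be tested without hitting Reddit's API.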

About

Distills popular Reddit threads into concise, LinkedIn-ready posts
