===============================================================================
                               REDDIT SCRAPER
===============================================================================

==================
Introduction
==================

Reddit (www.reddit.com) is a popular social link-aggregation website visited
by millions of users every day. Users rank each submission with "upvotes" and
"downvotes", and the submission's score is the difference between the two.

Reddit does not provide an easily accessible database of previous
submissions, so it is useful to scrape the website and build a local database
of submissions that we can query directly.


==================
Overview
==================

Although we can search Reddit by keyword and post count, each page of results
carries only an 'after' id pointing to the next page, so we cannot jump to an
arbitrary page. For later convenience, we traverse the pages in order and
store all submissions in a local database.

Reddit provides a convenient JSON interface to every page: appending '.json'
to a page's path returns the same listing as JSON. We use this interface to
parse the submission data easily.
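For example, a minimal sketch of this traversal using only the standard
library might look like the following. The URL, the helper names, and the
one-second default delay are illustrative; the 'data.children' and
'data.after' keys reflect Reddit's listing JSON, and the custom User-Agent is
an assumption about what Reddit's servers accept:

    import json
    import time
    import urllib.request

    USER_AGENT = 'reddit-scraper-example'  # Reddit rejects the default agent

    def fetch_page(url):
        # Fetch one listing page and parse its JSON body.
        req = urllib.request.Request(url, headers={'User-Agent': USER_AGENT})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def iter_submissions(json_url, delay=1):
        # Follow the 'after' id from page to page, yielding submissions.
        after = None
        while True:
            url = json_url + ('&after=' + after if after else '')
            listing = fetch_page(url)
            for child in listing['data']['children']:
                yield child['data']
            after = listing['data']['after']
            if after is None:        # no 'after' id means the last page
                break
            time.sleep(delay)        # avoid overloading the servers

    # '.json' goes on the path, before the query string:
    for sub in iter_submissions('http://www.reddit.com/top/.json?t=all'):
        print(sub['title'])
        break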

We only store the basic information for each submission (upvotes, downvotes,
poster, time, link URL, etc.). We do not need to store the comments of each
submission.
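
As a sketch, this subset of fields could be pulled from each submission's
JSON object as shown below. The key names follow Reddit's listing format, but
the exact set kept here is an assumption:

    FIELDS = ('id', 'author', 'title', 'url', 'ups', 'downs', 'created_utc')

    def basic_info(sub):
        # Keep only the basic columns; missing keys default to None.
        return {k: sub.get(k) for k in FIELDS}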

==================
Interface
==================

redditscraper module:

searchscrape(words,[scraper,[pagelimit,[scorelimit]]])
	Performs a Reddit search on each entry in words and scrapes the results
	up to pagelimit (default: -1) and scorelimit (default: -1). If scraper
	is not given as a parameter, a default RedditScraper object is used.
	The words parameter should be an iterable of strings. See the usage
	sketch at the end of this module's description.
	

class RedditScraper(dbname)
	Initializes the scraper with a database named 'dbname'.
	
	scrape(url,[pagelimit,[scorelimit,[delay]]])
		Scrapes a Reddit URL and writes all submissions on each page
		to the database. The scraper traverses pages until it reaches
		the last page or until pagelimit or scorelimit is reached.

		If pagelimit (default: -1) or scorelimit (default: -1) is negative,
		that limit is ignored while scraping.

		A delay (default: 1) in seconds between requests avoids overloading
		the Reddit servers.
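
A hypothetical usage of this interface, based only on the signatures above
(module and argument names as documented; the database file name and the
argument values are arbitrary):

    from redditscraper import RedditScraper, searchscrape

    scraper = RedditScraper('reddit.db')

    # Scrape at most 10 pages of the all-time top listing, ignoring the
    # score limit, with a 2-second delay between requests.
    scraper.scrape('http://www.reddit.com/top/?t=all', 10, -1, 2)

    # Search for each keyword and scrape up to 5 pages of results each.
    searchscrape(['python', 'scraping'], scraper, 5)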

redditdb module:

class RedditDatabase(dbname)
	Creates a new, or opens an existing, SQLite database named 'dbname'.

	writesubmission(submission)
		Writes a submission (a dictionary mapping each column to its value)
		to the database. If a submission is already in the database, it will
		not be duplicated.
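
A minimal sketch of how writesubmission can avoid duplicates with sqlite3;
the table schema and column names here are assumptions, not the module's
actual schema:

    import sqlite3

    class RedditDatabase:
        def __init__(self, dbname):
            self.conn = sqlite3.connect(dbname)
            self.conn.execute(
                'CREATE TABLE IF NOT EXISTS submissions ('
                ' id TEXT PRIMARY KEY, author TEXT, title TEXT, url TEXT,'
                ' ups INTEGER, downs INTEGER, created_utc REAL)')

        def writesubmission(self, submission):
            # INSERT OR IGNORE skips rows whose primary key already exists,
            # so an already-stored submission is not duplicated.
            self.conn.execute(
                'INSERT OR IGNORE INTO submissions'
                ' (id, author, title, url, ups, downs, created_utc)'
                ' VALUES (:id, :author, :title, :url, :ups, :downs,'
                ' :created_utc)',
                submission)
            self.conn.commit()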


==================
Notes
==================
Traversing from '/top/?t=all', we could only read 404 pages before the
traversal stopped. Reddit appears to limit the maximum number of traversable
pages.

Last page URL: http://www.reddit.com/top/?t=all&after=t3_d4lfv
