(More info on this blog post)


Adapted rank-es, proof of concept.

Or: can anyone easily create a news aggregator without any users around to score the news articles? And: can we add comments to it?

And: can we do it in a very, very easy way?

Keep it simple

Only two scripts are needed: one to populate the database and another one to generate static HTML files.

  • The retriever downloads the new articles from the sources and stores them in the database. It also moves expired links and scores the ones still alive.
  • The generator renders the static HTML files from the database contents.

That's it.
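The score-and-expire step of the retriever can be sketched as follows. This is a minimal illustration, not the actual script: the table name, column names, decay formula and constants are all assumptions (the README does not show the schema), using an HN-style time-decay score for concreteness.

```python
import sqlite3
import time

GRAVITY = 1.8           # decay exponent -- assumed, tune to taste
MAX_AGE = 7 * 86400     # links older than a week count as expired (assumed)

def rescore(db_path="db/reranker.db"):
    """Delete expired links and recompute a time-decayed score for the rest."""
    now = time.time()
    conn = sqlite3.connect(db_path)
    # hypothetical schema: links(url, title, published, score)
    conn.execute("DELETE FROM links WHERE ? - published > ?", (now, MAX_AGE))
    rows = conn.execute("SELECT url, published FROM links").fetchall()
    for url, published in rows:
        age_hours = (now - published) / 3600.0
        # fresher links score higher; the score decays polynomially with age
        score = 1.0 / (age_hours + 2.0) ** GRAVITY
        conn.execute("UPDATE links SET score = ? WHERE url = ?", (score, url))
    conn.commit()
    conn.close()
```

With no users voting, time is the only signal, so the ranking is simply a freshness decay applied on every cron run.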

Installation and usage

You will need the following Python modules: feedparser, jinja2, sqlite3, urllib2 and json (the last three ship with Python 2's standard library). You can install the others locally in your user directory with pip's --user option: pip install --user module. I find this much more convenient than creating a full environment, and it also works on shared hosting (I am on DreamHost and it works like a charm.)

  1. mkdir db
  2. Create the database: sqlite3 db/reranker.db < schema.sqlite3 (or any other DB file name you want).
  3. Copy the configuration file and edit the relevant variables so that they reflect the actual paths on your system. Use full paths whenever possible. If you want to use the commenting system, register a new site on Disqus and set DISQUS to the appropriate value.
  4. Add a cron entry to your system to run first the retriever and then the generator. Mine runs every 30 minutes.

And that's basically it.
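The generator half of the pipeline can be sketched like this. Again, this is an illustration rather than the actual script: the table name, columns, template and output path are assumptions, shown with jinja2 since the README lists it as a dependency.

```python
import sqlite3
from jinja2 import Template

# toy inline template; the real project presumably loads one from disk
PAGE = Template(
    "<ul>\n"
    "{% for link in links %}"
    '<li><a href="{{ link.url }}">{{ link.title }}</a></li>\n'
    "{% endfor %}"
    "</ul>\n"
)

def generate(db_path="db/reranker.db", out_path="index.html"):
    """Render the top-scored links into a static HTML page."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    links = conn.execute(
        "SELECT url, title FROM links ORDER BY score DESC LIMIT 30"
    ).fetchall()
    conn.close()
    with open(out_path, "w") as f:
        f.write(PAGE.render(links=links))
```

Because the output is plain static HTML, there is nothing to run server-side between cron invocations, which is what makes this work on a shared host.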

Finding whether my sources use "link" or "id" (or any other tag)

It should not be very difficult. Let's try with The Guardian.

import feedparser
test = feedparser.parse("")  # paste the feed URL here

And then simply inspect the output. This feed, for instance, uses both:

'guidislink': False,
'id': u'',
'link': u'',
'links': [{'href': u' [...]

