Dmoz is an open directory which lists and groups web pages into categories (directories). Their data is publicly available, but provided as an RDF file - a huge, funny XML file.

Dmoz Parser

This is a really simple Python implementation of a Dmoz RDF parser. It does not try to be smart and process the parsed XML for you; you have to provide a handler implementation where YOU decide what to do with the data (store it in a file or database, print it, etc.).

This parser assumes that the last entity in each Dmoz page is the topic:

 <ExternalPage about="">
   <d:Title>Animation World Network</d:Title>
   <d:Description>Provides information resources to the international animation community. Features include searchable database archives, monthly magazine, web animation guide, the Animation Village, discussion forums and other useful resources.</d:Description>
   <topic>Top/Arts/Animation</topic>
 </ExternalPage>

This assumption is strictly checked, and processing will abort if it is violated.
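To illustrate the idea (this is a sketch using xml.etree.ElementTree.iterparse, not the project's actual code; the sample RDF and function name are made up), a streaming parser can check the last child of every ExternalPage and fail fast if it is not the topic:

```python
import io
import xml.etree.ElementTree as ET

SAMPLE = """<RDF xmlns:d="http://purl.org/dc/elements/1.0/">
  <ExternalPage about="http://example.com/">
    <d:Title>Example</d:Title>
    <d:Description>An example page.</d:Description>
    <topic>Top/Example</topic>
  </ExternalPage>
</RDF>"""

def pages(stream):
    # Stream the RDF and yield (url, fields) for each ExternalPage,
    # aborting if the last child element is not the topic.
    for _, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "ExternalPage":
            children = list(elem)
            assert children[-1].tag == "topic", "last entity must be topic"
            # Strip the namespace prefix from tags like {…}Title.
            fields = {child.tag.split("}")[-1]: child.text for child in children}
            yield elem.get("about"), fields
            elem.clear()  # free memory; important for a multi-GB file

for url, fields in pages(io.StringIO(SAMPLE)):
    print(url, fields["topic"])
```

Streaming (rather than building the whole tree) is what makes a multi-gigabyte dump tractable.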

The RDF file needs to be downloaded, but it can stay packed. You can download the RDF from the Dmoz site.

The RDF is pretty large (over 2 GB unpacked) and parsing it takes some time, so there is a progress indicator.


This parser does not check for links between topics in the hierarchy, nor does it do any sophisticated parsing of the hierarchy.

The same URL might appear in multiple locations in the hierarchy.
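If your application needs each URL only once, deduplication is left to your handler; a minimal sketch of the idea (the function and sample data below are illustrative, not part of the parser):

```python
def dedupe(pages):
    # Keep only the first occurrence of each URL; later duplicates
    # from other places in the hierarchy are dropped.
    seen = set()
    for url, content in pages:
        if url in seen:
            continue
        seen.add(url)
        yield url, content

sample = [("http://a.example/", {"topic": "Top/A"}),
          ("http://b.example/", {"topic": "Top/B"}),
          ("http://a.example/", {"topic": "Top/C/A"})]
print(list(dedupe(sample)))
```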


You need to install the dependencies from the requirements.txt file, for example with pip install -r requirements.txt


Instantiate the parser, provide the handler and run.

#!/usr/bin/env python

from parser import DmozParser
from handlers import JSONWriter

parser = DmozParser()
parser.add_handler(JSONWriter('output.json'))
parser.run()

JSONWriter is a built-in handler which outputs the pages, one JSON object per line. (Note: this means the output file is not one large JSON list.)
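Because each line is a standalone JSON object, the output can be consumed line by line without loading the whole file; a sketch (the reader function is illustrative, not part of this project):

```python
import json

def read_pages(path):
    # Yield one page dict per non-empty line of a JSONWriter output file.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```

This per-line format is also why the file cannot be passed to json.load directly.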

Terminal Usage

python <content.rdf.u8 file path> <output file path>

Example: python ./data/content.rdf.u8 ./data/parsed.json

Built-in handlers

There are two built-in handlers so far: JSONWriter and CSVWriter. CSVWriter is buggy, and we recommend JSONWriter instead.


A handler must implement two methods:

def page(self, page, content)

This method will be called every time a new page is extracted from the RDF; the page argument contains the URL of the page, and content contains a dictionary of the page content.

def finish(self)

The finish method will be called after the parsing is done. You may want to clean up here, close the files, etc.
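As a sketch, a minimal custom handler implementing this two-method interface might look like this (the class name is illustrative, and how you register it with the parser follows the usage example above):

```python
class PageCounter:
    """Minimal handler: counts pages and collects their URLs."""

    def __init__(self):
        self.count = 0
        self.urls = []

    def page(self, page, content):
        # Called for each page extracted from the RDF:
        # `page` is the URL, `content` is a dict of page fields.
        self.count += 1
        self.urls.append(page)

    def finish(self):
        # Called once after parsing is done; close files or
        # connections here if your handler opened any.
        pass
```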