Dmoz is an open directory that lists and groups web pages into categories (directories). Its data is publicly available, but provided as an RDF file - a huge, somewhat unusual XML file.
This is a simple Python implementation of a Dmoz RDF parser. It does not try to be smart and process the parsed XML for you; you provide a handler implementation where YOU decide what to do with the data (store it in a file or a database, print it, etc.).
This parser assumes that the last element in each Dmoz page entry is the topic:
<ExternalPage about="http://www.awn.com/">
  <d:Title>Animation World Network</d:Title>
  <d:Description>Provides information resources to the international animation community. Features include searchable database archives, monthly magazine, web animation guide, the Animation Village, discussion forums and other useful resources.</d:Description>
  <priority>1</priority>
  <topic>Top/Arts/Animation</topic>
</ExternalPage>
This assumption is strictly checked, and processing will abort if it is violated.
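The check itself is easy to picture with the standard library. The sketch below is illustrative only (it is not the parser's actual code, and the `xmlns:d` declaration is added so the snippet parses standalone): it reads one `ExternalPage` element and asserts that its last child is the `topic`.

```python
import xml.etree.ElementTree as ET

# A single page entry, as in the example above; the xmlns:d declaration
# is added here only so this standalone snippet is well-formed XML.
sample = """<ExternalPage about="http://www.awn.com/"
                xmlns:d="http://purl.org/dc/elements/1.0/">
  <d:Title>Animation World Network</d:Title>
  <d:Description>Provides information resources.</d:Description>
  <priority>1</priority>
  <topic>Top/Arts/Animation</topic>
</ExternalPage>"""

page = ET.fromstring(sample)
children = list(page)

# The "topic comes last" assumption, checked strictly.
assert children[-1].tag == "topic", "topic must be the last element"
topic = children[-1].text
url = page.get("about")
```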
The RDF file needs to be downloaded first, but it can stay compressed. You can download it from the Dmoz site.
The RDF is pretty large - over 2 GB uncompressed - and parsing it takes some time, so a progress indicator is shown.
This parser does not resolve links between topics in the hierarchy, nor does it do any sophisticated parsing of the hierarchy.
The same URL might appear in multiple locations in the hierarchy.
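If you want one record per URL despite that, you can aggregate topics in your handler as pages arrive. This is a minimal sketch of the idea; the `record` function and the dict-based aggregation are my own illustration, not part of this project:

```python
from collections import defaultdict

# Map each URL to every topic it was seen under.
topics_by_url = defaultdict(list)

def record(url, content):
    # The same URL may appear in several places in the hierarchy,
    # so append rather than overwrite.
    topics_by_url[url].append(content.get("topic"))

record("http://www.awn.com/", {"topic": "Top/Arts/Animation"})
record("http://www.awn.com/", {"topic": "Top/Arts/Animation/Magazines"})
```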
You need to install dependencies from the requirements.txt file, for example by
pip install -r requirements.txt
Instantiate the parser, provide the handler and run.
#!/usr/bin/env python
from parser import DmozParser
from handlers import JSONWriter

parser = DmozParser()
parser.add_handler(JSONWriter('output.json'))
parser.run()
JSONWriter is the builtin handler that outputs the pages, one JSON object per line. (Note: this is different from the entire file being one large JSON list.)
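Because each line is an independent JSON object, the output can be consumed line by line without loading the whole file. A short sketch of reading such output (the two sample records and their field names are invented for illustration):

```python
import io
import json

# Simulate a small output file: one JSON object per line.
buf = io.StringIO(
    '{"url": "http://example.com/", "topic": "Top/Example"}\n'
    '{"url": "http://example.org/", "topic": "Top/Other"}\n'
)

# Parse each non-empty line independently.
pages = [json.loads(line) for line in buf if line.strip()]
```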
python parser.py <content.rdf.u8 file path> <output file path>
python parser.py ./data/content.rdf.u8 ./data/parsed.json
There are two builtin handlers so far - JSONWriter and CSVWriter. CSVWriter is buggy (see "handler.py" to understand why), so we recommend JSONWriter.
A handler must implement two methods:
def page(self, page, content)
This method will be called every time a new page is extracted from the RDF; the page argument contains the URL of the page, and content contains a dictionary of the page's content.
def finish(self)
The finish method will be called after parsing is done. You may want to clean up here: close files, etc.
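Putting the interface together, a minimal custom handler might look like the sketch below. Only the page/finish interface comes from this project; the counting and printing logic is illustrative:

```python
class PrintHandler:
    """Prints each page's URL and topic, and a summary when parsing ends."""

    def __init__(self):
        self.count = 0

    def page(self, page, content):
        # `page` is the URL; `content` is a dict of the page's fields.
        self.count += 1
        print(page, content.get("topic"))

    def finish(self):
        # Called once after parsing completes; clean up here.
        print("parsed %d pages" % self.count)
```

You would register it exactly like JSONWriter above, with parser.add_handler(PrintHandler()).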