πŸ“š Offline Wiki Reader

A shell script for searching Wikipedia index files and extracting single page content straight from the related compressed Wikipedia XML dumps. The key feature is that it operates on the compressed content within seconds (as opposed to uncompressing dozens of gigabytes beforehand). Wikipedia publishes "byte index" files alongside every XML dump to make this possible.

This is a very low-level approach, but it lets you occasionally look up articles in your own local encyclopedia.
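Each line of the multistream index has the form byte_offset:page_id:title, and every compressed stream inside the dump begins at one of those offsets, so a single stream (roughly 100 pages) can be cut out and decompressed on its own. The following is a minimal sketch of that idea, not the repository's script; the file names and the example title are placeholders:

```sh
# Sketch only: look up one article via the multistream index.
DUMP="$WIKI_DOWNLOADS/enwiki-latest-pages-articles-multistream.xml.bz2"
INDEX="$WIKI_DOWNLOADS/enwiki-latest-pages-articles-multistream-index.txt"
TITLE="Alan Turing"

# Index lines read "byte_offset:page_id:title"; find the offset of the
# compressed stream containing the page (titles without ':' assumed here).
start=$(awk -F: -v t="$TITLE" '$3 == t { print $1; exit }' "$INDEX")

# The next larger offset in the index marks the end of that stream.
end=$(awk -F: -v s="$start" '$1+0 > s+0 { print $1; exit }' "$INDEX")

# Cut that one bz2 stream out of the dump, decompress it (it holds about
# 100 pages with no root element), wrap it, and pull out the wanted <page>.
{ echo '<pages>'
  dd if="$DUMP" bs=1 skip="$start" count=$((end - start)) 2>/dev/null | bunzip2
  echo '</pages>'
} | xmllint --xpath "//page[title=\"$TITLE\"]" -
```

Here bs=1 keeps dd's skip and count in plain bytes; it is slow but simple, and the repository's script may well go about this differently (it also lists bzip2recover and pandoc among its tools).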

Getting started

Make sure you have all required commands installed before running the script; a quick check is shown after the list:

  • dd - for extracting a binary file segment
  • bzip2recover - for recovering files
  • bunzip2 - for decompressing
  • cat - for concatenating files
  • xmllint - for extracting XML content
  • pandoc - for converting between document formats
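
One way to verify that everything is available (a minimal sketch, not part of the repository):

```sh
# Report any of the required tools that is missing from the PATH.
for cmd in dd bzip2recover bunzip2 cat xmllint pandoc; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```

On Debian/Ubuntu, for example, xmllint ships in the libxml2-utils package, dd and cat come with coreutils, bzip2recover and bunzip2 come with bzip2, and pandoc has its own package.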

Download a Wikipedia multistream dump plus its index file for any number of languages. Each pair of files should look like this:

  • [lang]wiki-latest-pages-articles-multistream-index.txt (decompressed)
  • [lang]wiki-latest-pages-articles-multistream.xml.bz2 (compressed)

Sources: https://dumps.wikimedia.org/ (for English, see https://dumps.wikimedia.org/enwiki/latest/).
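
Fetching and preparing the English pair could look like this (a sketch; wget is not required by the script itself, and the index is published compressed, so it is decompressed after download):

```sh
# Fetch the English multistream dump and its index into $WIKI_DOWNLOADS.
cd "$WIKI_DOWNLOADS"
wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles-multistream.xml.bz2
wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles-multistream-index.txt.bz2

# The dump stays compressed; only the index needs to be decompressed.
bunzip2 enwiki-latest-pages-articles-multistream-index.txt.bz2
```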

You can store those files anywhere on your local system; just point the WIKI_DOWNLOADS environment variable at the containing folder before you run the offlineWikiReader.sh script.
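
For example (the folder path is a placeholder; check the script itself for the arguments it expects):

```sh
# Point the script at the folder holding the dump/index pairs.
export WIKI_DOWNLOADS="$HOME/wikipedia-dumps"

# Run the script from the repository checkout.
./offlineWikiReader.sh
```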
