wikipedia markup parser
by Spencer Kelly and contributors

wtf_wikipedia turns wikipedia's markup language into JSON,
so getting data from wikipedia is easier.

🏠 Try to have a good time. 🛀

this is among the most-curious data formats you can find.
(then we buried our human-record in it)


wtf_wikipedia supports many recursive shenanigans, deprecated and obscure template variants, and illicit 'wiki-esque' shorthands.


It will try its best, and fail in reasonable ways.

building your own parser is never a good idea
but this library aims to be a straightforward way to get data out of wikipedia
... so don't be mad at me, be mad at this.

Demo   •   Tutorial   •   Api

well ok then,

npm install wtf_wikipedia

var wtf = require('wtf_wikipedia');

wtf.fetch('Whistling').then(doc => {
  doc.categories();
  //['Oral communication', 'Vocal music', 'Vocal skills']

  doc.sections('As communication').text();
  // 'A traditional whistled language named Silbo Gomero..'

  doc.sections('See Also').links().map(link => link.page());
  //['Slide whistle', 'Hand flute', 'Bird vocalization'...]
});

on the client-side:

<script src=""></script>
<script>
  //(follows redirect)
  wtf.fetch('On a Friday', 'en', function(err, doc) {
    var val = doc.infobox(0).get('current_members');
    val.links().map(link => link.page());
    //['Thom Yorke', 'Jonny Greenwood', 'Colin Greenwood'...]
  });
</script>

What it does:

  • Detects and parses redirects and disambiguation pages
  • Parses infoboxes into a formatted key-value object
  • Handles recursive templates and links - like [[.. [[...]] ]]
  • Per-sentence plaintext and link resolution
  • Parses and formats internal links
  • Creates image thumbnail urls from File:XYZ.png filenames
  • Properly resolves {{CURRENTMONTH}} and {{CONVERT ..}} type templates
  • Parses images, headings, and categories
  • Converts 'DMS-formatted' (59°12'7.7"N) geo-coordinates to lat/lng
  • Parses citation metadata
  • Eliminates xml, latex, css, and table-sorting cruft
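the geo-coordinate conversion, for example, can be sketched roughly like this (a toy version - the real parser accepts many more notations):

```javascript
// Convert a DMS-formatted coordinate like 59°12'7.7"N into a decimal value.
// A simplified sketch - the library's real parser handles many more formats.
function dmsToDecimal(str) {
  const m = str.match(/(\d+)°(\d+)'([\d.]+)"([NSEW])/);
  if (!m) {
    return null;
  }
  const [, deg, min, sec, dir] = m;
  let dec = Number(deg) + Number(min) / 60 + Number(sec) / 3600;
  if (dir === 'S' || dir === 'W') {
    dec = -dec; // south and west are negative
  }
  return dec;
}

console.log(dmsToDecimal(`59°12'7.7"N`)); // ≈ 59.2021
```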

But what about...


Wikimedia's Parsoid javascript parser is the official wikiscript parser, and is pretty cool. It reliably turns wikiscript into HTML, but not valid XML.

To use it for data-mining, you'll need to:

parsoid(wikiText) -> [headless/pretend-DOM] -> screen-scraping

which is fine,

but getting structured data this way (say, sentences or infobox values), is still a complex + weird process. Arguably, you're not any closer than you were with wikitext. This library has lovingly ❤️ borrowed a lot of code and data from the parsoid project, and thanks its contributors.

Full data-dumps:

wtf_wikipedia was built to work with dumpster-dive, which lets you parse a whole wikipedia dump on a laptop in a couple hours. It's definitely the way to go, instead of fetching many pages off the api.


const wtf = require('wtf_wikipedia')
//parse a page
var doc = wtf(wikiText, [options]);

//fetch & parse a page - wtf.fetch(title, [lang_or_wikiid], [options], [callback])
(async () => {
  var doc = await wtf.fetch('Toronto');
})();

//(callback format works too)
wtf.fetch(64646, 'en', (err, doc) => {
  //...
});

//get a random german page
wtf.random('de').then(doc => {
  //...
});

Main parts:

Document            - the whole thing
  - Category
  - Coordinate

  Section           - page headings ( ==these== )
    - Infobox       - a main, key-value template
    - Table         - rows and columns of data
    - Reference     - citations, all-forms
    - Template      - any other structured-data

    Paragraph       - content separated by two newlines
      - Image       - a picture, plus its caption
      - List        - a series of bullet-points

      Sentence      - contains links, formatting, dates

For the most part, these classes do the looping-around for you, so that Document.links() will go through every section, paragraph, and sentence to get their links.
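a simplified sketch of that tree-walk (not the library's actual classes - just the shape of it):

```javascript
// Each class collects the links of its children, so asking the Document
// for links walks the whole tree. A simplified sketch of the pattern.
class Sentence {
  constructor(links) { this._links = links; }
  links() { return this._links; }
}
class Paragraph {
  constructor(sentences) { this.sentences = sentences; }
  links() { return this.sentences.flatMap((s) => s.links()); }
}
class Section {
  constructor(paragraphs) { this.paragraphs = paragraphs; }
  links() { return this.paragraphs.flatMap((p) => p.links()); }
}
class Document {
  constructor(sections) { this.sections = sections; }
  links() { return this.sections.flatMap((s) => s.links()); }
}

const doc = new Document([
  new Section([new Paragraph([new Sentence(['Toronto']), new Sentence(['Ontario'])])]),
  new Section([new Paragraph([new Sentence(['Canada'])])]),
]);
console.log(doc.links()); // ['Toronto', 'Ontario', 'Canada']
```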

Broadly speaking, you can ask for the data you'd like:

  • .sections()       -   ==these things==
  • .sentences()
  • .paragraphs()
  • .links()
  • .tables()
  • .lists()
  • .images()
  • .templates()     -  {{these|things}}
  • .categories()
  • .citations()     -   <ref>these guys</ref>
  • .infoboxes()
  • .coordinates()

or output things in various formats:

  • .json()   -     handy, workable data
  • .text()   -     reader-focused plaintext
  • .html()
  • .markdown()
  • .latex()   -     (ftw)

or check page-level metadata:

  • .isRedirect()     -   boolean
  • .isDisambiguation()     -   boolean
  • .title()       -      guess the title of this page
  • .redirectsTo()     -   {page:'China', anchor:'#History'}



flip your wikimedia markup into a Document object

import wtf from 'wtf_wikipedia'
wtf(`==In Popular Culture==
* harry potter's wand
* the simpsons fence`);
// Document {text(), html(), lists()...}
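wiki bullet-lists are just lines that start with an asterisk - a toy version of that bit of the parse (the real one also handles nesting, numbered lists, and inline templates):

```javascript
// A toy parse of wiki list syntax - lines beginning with '*' become list items.
// The real parser also handles nesting, '#' numbered lists, and templates.
function parseLists(wiki) {
  return wiki
    .split('\n')
    .filter((line) => line.startsWith('*'))
    .map((line) => line.replace(/^\*+\s*/, ''));
}

console.log(parseLists(`==In Popular Culture==
* harry potter's wand
* the simpsons fence`));
// ["harry potter's wand", 'the simpsons fence']
```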

wtf.fetch(title, [lang_or_wikiid], [options], [callback])

retrieves raw contents of a mediawiki article from the wikipedia action API.

This method supports the errback callback form, or returns a Promise if the callback is omitted.
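that 'errback or promise' behaviour is a common wrapper pattern - a generic sketch of it (not the library's actual internals):

```javascript
// Generic 'callback or promise' wrapper - if a callback is supplied,
// use errback style; otherwise hand the promise back to the caller.
function maybeCallback(promise, callback) {
  if (typeof callback === 'function') {
    promise.then(
      (result) => callback(null, result),
      (err) => callback(err)
    );
    return undefined;
  }
  return promise;
}
```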

to call non-english wikipedia apis, add its language-name as the second parameter

wtf.fetch('Toronto', 'de', function(err, doc) {
  //Toronto ist mit 2,6 Millionen Einwohnern..
});

you may also pass the wikipedia page id as parameter instead of the page title:

wtf.fetch(64646, 'de').then(console.log).catch(console.log)

the fetch method follows redirects.
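under the hood, a fetch boils down to one call to the mediawiki action api - its `redirects` parameter is what makes redirect-following work. a rough sketch of the url being built (the library's real request adds a few more options):

```javascript
// Build a MediaWiki action-API url for fetching raw article wikitext.
// A rough sketch - the library's actual request adds more parameters.
function apiUrl(title, lang = 'en') {
  const params = new URLSearchParams({
    action: 'query',
    prop: 'revisions',
    rvprop: 'content',
    titles: title,
    redirects: 'true', // follow redirects server-side
    format: 'json',
    origin: '*',
  });
  return `https://${lang}.wikipedia.org/w/api.php?${params}`;
}

console.log(apiUrl('Toronto', 'de'));
// https://de.wikipedia.org/w/api.php?action=query&prop=revisions&...
```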

the optional-callback pattern is the same for wtf.random()

wtf.random(lang, options, callback)

wtf.random(lang, options).then(doc => doc.infobox())

wtf.category(title, [lang_or_wikiid], [options], [callback])

retrieves all pages and sub-categories belonging to a given category:

let result = await wtf.category('Category:Politicians_from_Paris');
//{
//  pages: [{title: 'Paul Bacon', pageid: 1266127 }, ...],
//  categories: [ {title: 'Category:Mayors of Paris' } ]
//}

//this format works too
wtf.category('National Basketball Association teams', 'en', (err, result) => {
  //...
});


.text()

returns only nice plain-text of the article

var wiki =
  "[[Greater_Boston|Boston]]'s [[Fenway_Park|baseball field]] has a {{convert|37|ft}} wall.<ref>{{cite web|blah}}</ref>";
var text = wtf(wiki).text();
//"Boston's baseball field has a 37ft wall."

Section traversal:

wtf(page).sections('see also').remove()

Sentence data:

var s = wtf(page).sentences(4)
s.dates() //structured date templates


var img = wtf(page).images(0)
img.url()     // the full-size wikimedia-hosted url
img.thumbnail() // 300px, by default
img.format()  // jpg, png, ..
img.exists()  // HEAD req to see if the file is alive


if you're scripting this from the shell, or from another language, install with a -g, and then run:

$ wtf_wikipedia George Clooney --plaintext
# George Timothy Clooney (born May 6, 1961) is an American actor ...

$ wtf_wikipedia Toronto Blue Jays --json
# {text:[...], infobox:{}, categories:[...], images:[] }

Good practice:

The wikipedia api is pretty welcoming, though it recommends three things if you're going to hit it heavily -

  • pass an Api-User-Agent header, so they can easily identify (and throttle) misbehaving scripts
  • bundle multiple pages into one request, as an array
  • run requests serially, or at least, slowly
wtf.fetch(['Royal Cinema', 'Aldous Huxley'], 'en', {
  'Api-User-Agent': ''
}).then((docList) => {
  let allLinks = docList.map((doc) => doc.links());
});
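and when you do need many pages, running them one-at-a-time with a pause keeps you friendly - a generic sketch, where `getPage` stands in for whatever fetch you're doing:

```javascript
// Run one async job per title, serially, with a polite delay between calls.
// 'getPage' is a stand-in for your actual fetch function.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fetchSlowly(titles, getPage, delayMs = 500) {
  const results = [];
  for (const title of titles) {
    results.push(await getPage(title)); // one request in flight at a time
    await sleep(delayMs);
  }
  return results;
}
```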


Join in! - projects like these are only done with many hands, and we try to be friendly and easy. PRs always welcome.

Some Big Wins:

  1. Supporting more templates - This is actually kinda fun.
  2. Adding more tests - you won't believe how helpful this is.
  3. Make a cool thing. Holler it at spencer.

if it's a big change, make an issue and talk-it-over first.

Otherwise, go nuts!

See also:

Thank you to the cross-fetch library.

