A simple crawler for Wikipedia pages. Given a Wikipedia page title, it finds every internal link on that page and keeps crawling the linked pages until it reaches the given depth. If no depth is given, it crawls until stopped. When the crawler stops, it creates a .txt file for every page crawled, listing every link found on that page.
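The depth-limited crawl described above can be sketched as a breadth-first traversal. This is only an illustration, not the project's actual code: the names `crawl` and `get_links` are hypothetical, and in practice `get_links` would wrap something like `wikipedia.page(title).links` from goldsmith's `wikipedia` module.

```python
from collections import deque

def crawl(start, get_links, depth=None):
    """Breadth-first crawl from `start`, following links returned by
    get_links(title) down to `depth` levels (None = crawl until stopped).
    Returns a dict mapping each crawled page title to its list of links."""
    results = {}
    queue = deque([(start, 0)])   # (page title, distance from start)
    seen = {start}
    while queue:
        title, level = queue.popleft()
        links = get_links(title)  # hypothetical: e.g. wikipedia.page(title).links
        results[title] = links
        if depth is not None and level >= depth:
            continue              # depth reached: record links, don't follow them
        for link in links:
            if link not in seen:
                seen.add(link)
                queue.append((link, level + 1))
    return results

# Toy link graph standing in for live Wikipedia pages:
graph = {"A": ["B", "C"], "B": ["C"], "C": []}
pages = crawl("A", lambda t: graph.get(t, []), depth=1)
# pages == {"A": ["B", "C"], "B": ["C"], "C": []}
```

Writing the per-page .txt files would then be a final loop over `pages`, saving each link list to a file named after its page.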
MIT licensed. See the LICENSE file for full details.
I would like to thank:
- goldsmith for making the Wikipedia module
- the Wikimedia Foundation for making this possible