Simple Wikipedia Crawler

A simple crawler for Wikipedia pages. Given a Wikipedia page title, it finds every internal link on that page and keeps following those links until the given depth is reached. If no depth is given, it crawls until stopped. When the crawler stops, it writes a .txt file for every page crawled, listing every link found on that page.
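The behavior described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code: the names `WikiLinkParser` and `crawl` are hypothetical, it assumes English Wikipedia URLs of the form `/wiki/Title`, and it uses only the standard library.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class WikiLinkParser(HTMLParser):
    """Collects internal article links, i.e. hrefs of the form /wiki/Title.

    Namespaced pages such as /wiki/File:... or /wiki/Help:... are skipped
    (their titles contain a colon).
    """

    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        if href.startswith("/wiki/") and ":" not in href:
            self.links.add(href[len("/wiki/"):])


def crawl(start_title, depth=None):
    """Breadth-first crawl from start_title, up to `depth` link hops.

    depth=None means crawl until interrupted. Returns {title: links},
    and writes one <title>.txt file per crawled page listing its links.
    """
    frontier = [(start_title, 0)]
    seen = {start_title}
    results = {}
    while frontier:
        title, d = frontier.pop(0)
        if depth is not None and d > depth:
            continue
        with urlopen(f"https://en.wikipedia.org/wiki/{title}") as resp:
            html = resp.read().decode("utf-8", errors="replace")
        parser = WikiLinkParser()
        parser.feed(html)
        results[title] = parser.links
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                frontier.append((link, d + 1))
    for title, links in results.items():
        with open(f"{title.replace('/', '_')}.txt", "w") as fh:
            fh.write("\n".join(sorted(links)))
    return results
```

A real crawler would also want request throttling and error handling; Wikipedia's API is usually a better choice than scraping HTML, but the sketch keeps to the page-parsing approach the description implies.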

License

MIT licensed. See the LICENSE file for full details.

Credits

I want to thank
