halfdata/greencrawler

Green Crawler

This is a template for a simple web crawler written in Python. You just provide a starting URL and it crawls through all links and sublinks (at any nesting level) found on the pages. As a developer you can extend its functionality. A few ideas:

  • Download images found on web pages.
  • Extract email addresses.
  • Count specific keywords.
  • etc.
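The crawl described above (start from one URL, follow every same-site link at any depth) can be sketched as a breadth-first traversal. This is an illustrative sketch, not Green Crawler's actual API: the `crawl`, `extract_links`, and `same_site` names are assumptions, and the page fetcher is injected as a plain function so the loop stays easy to test and to extend (e.g. for image downloading or email extraction).

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkExtractor(HTMLParser):
    """Collects absolute link targets from <a href="..."> tags."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page they appear on.
                    self.links.add(urljoin(self.base_url, value))


def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links


def same_site(url, start_url):
    # Restrict the crawl to the starting host so it does not wander off-site.
    return urlparse(url).netloc == urlparse(start_url).netloc


def crawl(start_url, fetch, max_pages=100):
    """Breadth-first crawl; `fetch` maps a URL to its HTML text."""
    seen = {start_url}
    queue = deque([start_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        for link in extract_links(fetch(url), url):
            if same_site(link, start_url) and link not in seen:
                seen.add(link)
                queue.append(link)
    return seen
```

In a real run `fetch` would wrap something like `urllib.request.urlopen`; injecting it also makes it the natural place to hook in extensions such as saving images or scanning pages for keywords.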

The examples/ folder contains code that demonstrates how to use Green Crawler.

More detailed documentation is coming soon...
