Crawly

A batch processing engine to run web crawlers in parallel.

DISCLAIMER: Crawly is NOT a crawler itself; it is a container that loads crawler jobs, runs them in parallel, and saves their output as JSON files.
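The pattern described above — a batch engine that takes crawler jobs, runs them concurrently, and writes each result to a JSON file — can be sketched roughly as follows. This is an illustrative sketch in Python, not Crawly's actual API; the job names, the `run_jobs_in_parallel` helper, and the stand-in `crawl_example` function are all assumptions for the example.

```python
import json
import concurrent.futures
from pathlib import Path

# Hypothetical stand-in for a real crawler job; Crawly would load actual
# crawler implementations, each producing structured data for one target.
def crawl_example(url: str) -> dict:
    return {"url": url, "status": "ok"}

def run_jobs_in_parallel(jobs: dict, out_dir: str = "output") -> dict:
    """Run crawler jobs concurrently; save each result as <name>.json."""
    Path(out_dir).mkdir(exist_ok=True)
    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # Map each submitted future back to its job name.
        futures = {pool.submit(fn, arg): name for name, (fn, arg) in jobs.items()}
        for fut in concurrent.futures.as_completed(futures):
            name = futures[fut]
            data = fut.result()
            # Persist the crawler's output as a JSON file per job.
            (Path(out_dir) / f"{name}.json").write_text(json.dumps(data))
            results[name] = data
    return results

jobs = {
    "site_a": (crawl_example, "https://example.com/a"),
    "site_b": (crawl_example, "https://example.com/b"),
}
results = run_jobs_in_parallel(jobs)
```

A thread pool suits I/O-bound crawling, since each job spends most of its time waiting on network responses rather than the CPU.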
