Web-crawler in Go

A multi-threaded web crawler that is limited to a single domain. For instance, if the origin or starting point is www.amazon.com, it crawls all pages within amazon.com but does not follow external links, for example to Facebook or Twitter.
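The key behaviour is deciding, for each discovered link, whether its host belongs to the origin's domain before enqueuing it. Below is a minimal sketch of such a same-domain check, for illustration only; the function name sameDomain and the example URLs are assumptions and are not taken from this repository's code.

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// sameDomain reports whether candidate belongs to the same domain as origin.
// Subdomains of the origin (e.g. smile.amazon.com for amazon.com) are accepted.
func sameDomain(origin, candidate string) bool {
	o, err := url.Parse(origin)
	if err != nil {
		return false
	}
	c, err := url.Parse(candidate)
	if err != nil {
		return false
	}
	// Normalize away a leading "www." so www.amazon.com and amazon.com match.
	originHost := strings.TrimPrefix(o.Hostname(), "www.")
	candHost := strings.TrimPrefix(c.Hostname(), "www.")
	return candHost == originHost || strings.HasSuffix(candHost, "."+originHost)
}

func main() {
	fmt.Println(sameDomain("https://www.amazon.com", "https://www.amazon.com/gp/help"))   // true: same domain
	fmt.Println(sameDomain("https://www.amazon.com", "https://www.facebook.com/amazon"))  // false: external link, not followed
}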

Getting the Source Code

To get the source code, clone the GitHub repository:

$ git clone https://github.com/shalomRachapudi/web-crawler.git

Steps to run the web-crawler

$ cd web-crawler
$ ./run.sh
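The run.sh script wraps the build-and-run step. If you prefer to use the Go toolchain directly, and assuming the crawler's main package sits at the repository root and accepts the start URL as a command-line argument (both are assumptions; check run.sh and the Go sources for the actual invocation), something like the following should work:

$ go build -o web-crawler .
$ ./web-crawler https://www.amazon.com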
