
Web Crawler

MIT License

Requirements: Python 3+

Usage

Set the URL of the website you wish to crawl in the script, then execute it.

Initialization

$ python main.py

Packages

| Package | Description |
| --- | --- |
| requests | Requests allows you to send HTTP/1.1 requests extremely easily. |
| BeautifulSoup | Beautiful Soup creates a parse tree for parsed pages that can be used to extract data from HTML. |
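As a rough illustration of how these two packages fit together, here is a minimal single-page crawl sketch. The `START_URL` constant and `extract_links` helper are hypothetical names for this example; the actual variables and structure in `main.py` may differ.

```python
# Minimal sketch: fetch one page with requests, extract links with BeautifulSoup.
# START_URL and extract_links are illustrative names, not taken from main.py.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

START_URL = "https://example.com"  # replace with the site you wish to crawl


def extract_links(html, base_url):
    """Return absolute URLs for every <a href> found in the page."""
    soup = BeautifulSoup(html, "html.parser")
    return [urljoin(base_url, a["href"]) for a in soup.find_all("a", href=True)]


if __name__ == "__main__":
    response = requests.get(START_URL, timeout=10)
    response.raise_for_status()  # stop on HTTP errors (4xx/5xx)
    for link in extract_links(response.text, START_URL):
        print(link)
```

Relative hrefs are resolved against the page's base URL with `urljoin`, so the output is a list of absolute URLs a crawler could follow next.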

Please note that this repo is for educational purposes only. No contributor, major or minor, is at fault for any actions performed with this program.


🔓 License

MIT © Michael Radu
Don't really understand licenses or tl;dr? Check out the MIT license summary.