
johnmans/googleCrawler


googleCrawler

Google Crawler is an experimental project for automated scraping of Google search results for a given word or phrase. It stores the raw HTML of every results page returned by Google and lets the user filter that raw HTML through regular expressions to extract the actual results they were searching for. Although the concept is a little abstract at the moment, the application can successfully be used, for example, to retrieve random or specific kinds of email addresses from the internet (for email marketing purposes).
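The filtering step described above can be sketched as follows. This is a minimal illustration in Python, not the project's actual code; the regular expression and function names are assumptions, and the email pattern is a deliberate simplification:

```python
import re

# Illustrative regex for matching email addresses in raw HTML
# (a simplification; robust email matching is more involved).
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def filter_html(raw_html: str) -> list[str]:
    """Apply the regular-expression filter to a page of raw HTML
    and return the sorted, de-duplicated matches."""
    return sorted(set(EMAIL_PATTERN.findall(raw_html)))

# Example: extract addresses from a snippet of saved result HTML.
page = '<a href="mailto:info@example.com">info@example.com</a> spam@test.org'
print(filter_html(page))  # ['info@example.com', 'spam@test.org']
```

The same idea applies to any user-supplied pattern: the crawler saves each results page verbatim, and the filter is just a pattern run over that stored HTML.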

Features

Error-tolerant result scraping (the user only has to complete a CAPTCHA if one is shown)

The user can add any regular-expression filters of their liking

The user can specify how recent the results they are searching for should be

Results are saved as plain text (in a TXT file)

The application integrates a standalone Selenium server

