spnkr/scrapr

Fetch HTML web pages through a cache, so you don't get banned any more. Can also fetch the Google cache for a URL. Requires MongoDB.

Install

Add to your Gemfile:

gem 'scrapr'

Configure

Scrapr.content_expires_after = '24h'  # default
Scrapr.cache_store = :redis           # experimental
Scrapr.cache_store = :active_record   # default
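The expiry setting takes a shorthand duration string like `'24h'`. A minimal sketch of how such a string could be interpreted and used for staleness checks — this parser is illustrative only, not Scrapr's internal code:

```ruby
# Illustrative parser for duration strings like '24h', '30m', '10s'.
# Scrapr's actual parsing may differ; this only shows the idea.
UNITS = { 's' => 1, 'm' => 60, 'h' => 3600, 'd' => 86_400 }.freeze

def duration_in_seconds(str)
  match = str.match(/\A(\d+)([smhd])\z/)
  raise ArgumentError, "bad duration: #{str}" unless match
  match[1].to_i * UNITS[match[2]]
end

# A cached entry is stale once its age exceeds the configured expiry.
def stale?(fetched_at, expires_after)
  Time.now - fetched_at > duration_in_seconds(expires_after)
end

puts duration_in_seconds('24h')  # => 86400
```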

Get a page

html = Scrapr.get "www.google.com"
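The idea behind `Scrapr.get` is cache-first: serve stored HTML while a fresh copy exists, fetch and store otherwise. A toy in-memory sketch of that behavior — a plain Hash stands in for the MongoDB/Redis/ActiveRecord stores, and the fetcher is injected so the example stays self-contained:

```ruby
# Toy cache-first fetcher; illustrative only, not Scrapr internals.
class TinyCache
  def initialize(expires_after_seconds, fetcher)
    @ttl = expires_after_seconds
    @fetcher = fetcher   # callable that performs the real HTTP request
    @store = {}          # url => [html, fetched_at]
  end

  def get(url)
    html, fetched_at = @store[url]
    return html if html && Time.now - fetched_at < @ttl
    fresh = @fetcher.call(url)   # only hit the network on a cache miss
    @store[url] = [fresh, Time.now]
    fresh
  end
end

fetcher = ->(url) { "<html>#{url}</html>" }   # stub network call
cache = TinyCache.new(24 * 3600, fetcher)
cache.get("www.google.com")   # miss: fetches and stores
cache.get("www.google.com")   # hit: served from the store
```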

Return a Mechanize object

page = Scrapr.get "www.google.com", :mechanize
page = Scrapr.mechanize "www.google.com"

Get the Google cache

html = Scrapr.get "www.apple.com", :google_cache
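A Google-cache lookup boils down to requesting Google's webcache endpoint for the target URL. A sketch of how that URL can be built — the `cache:` endpoint pattern is Google's public form, but whether Scrapr constructs it exactly this way is an assumption:

```ruby
require 'erb'

# Build a Google web-cache URL for a target page.
# Endpoint pattern is Google's public cache: form; Scrapr may differ in detail.
def google_cache_url(url)
  "http://webcache.googleusercontent.com/search?q=cache:" +
    ERB::Util.url_encode(url)
end

puts google_cache_url("www.apple.com")
# => http://webcache.googleusercontent.com/search?q=cache:www.apple.com
```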

Search Google

html = Scrapr.google_search "terms here"
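A search call like this needs the free-form terms encoded into a query string. A minimal sketch using Ruby's standard `URI.encode_www_form` — the exact request Scrapr sends is not shown in the README, so this URL construction is illustrative:

```ruby
require 'uri'

# Build a Google search URL from free-form search terms.
# Illustrative only; Scrapr's actual request construction may differ.
def google_search_url(terms)
  "https://www.google.com/search?" + URI.encode_www_form(q: terms)
end

puts google_search_url("terms here")
# => https://www.google.com/search?q=terms+here
```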
