
lxf_archive

In issue 309 of Linux Format, a scraping tutorial claims to scrape the magazine's archive in only 70 lines of code. Unfortunately, it uses Requests and Beautiful Soup: fine libraries for making HTTP requests and parsing HTML respectively, but collectively not a scraping framework.

Merely parsing HTML is neither sufficient nor necessary for web crawling these days.

This repo uses Scrapy to extract the same data in 35 lines of code.

It's not just Scrapy's use of XPath that makes it powerful.

In Scrapy, the function that makes a request never parses the response; parsing happens in a separate callback. The code is flatter by design, so it's easier for novices to avoid tangled code.
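Here is a minimal sketch of that pattern. The URL, XPath expressions and field names are illustrative placeholders, not the selectors from the spider in this repo:

```python
import scrapy


class LxfSpider(scrapy.Spider):
    """Sketch of Scrapy's request/callback split."""

    name = "lxf"
    # Placeholder start URL; the real archive page may differ.
    start_urls = ["https://www.linuxformat.com/archives.html"]

    def parse(self, response):
        # This callback only finds links; it never parses issue pages itself.
        for href in response.xpath("//a[@class='issue']/@href").getall():
            yield response.follow(href, callback=self.parse_issue)

    def parse_issue(self, response):
        # Parsing happens here: one flat callback per page type.
        yield {
            "issue": response.xpath("//h1/text()").get(),
            "articles": response.xpath("//li[@class='article']/text()").getall(),
        }
```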

Want to be kind to your victims? Enable AutoThrottle.
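AutoThrottle is a couple of lines in settings.py. The setting names below are standard Scrapy; the delay values are just illustrative:

```python
# settings.py: AutoThrottle adapts the crawl rate to the server's latency.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 5.0         # initial download delay, in seconds
AUTOTHROTTLE_MAX_DELAY = 60.0          # back off this far under high latency
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0  # average concurrent requests per site
```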

Save the data to a .csv with:

```
scrapy crawl lxf -o lxf_archive.csv
```

(Or save as .json, or push to AWS S3; Scrapy does these things out of the box.)
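The same exports can also be configured declaratively with the FEEDS setting (Scrapy 2.1+). The bucket name below is a placeholder, and S3 export needs botocore plus credentials:

```python
# settings.py: feed exports, with no extra code in the spider.
FEEDS = {
    "lxf_archive.json": {"format": "json"},
    "s3://my-bucket/lxf_archive.csv": {"format": "csv"},  # placeholder bucket
}
# For S3, also set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
```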

Scrapy does things by default that novices don't know they need, and things that experienced data miners are too busy to implement.

Need headless browsers? Try Splash or Selenium.
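A sketch of the scrapy-splash route, assuming a Splash instance is running and the scrapy-splash middlewares and SPLASH_URL are configured in settings.py (this spider is illustrative, not part of this repo):

```python
import scrapy
from scrapy_splash import SplashRequest  # pip install scrapy-splash


class JsSpider(scrapy.Spider):
    name = "js_demo"  # illustrative only

    def start_requests(self):
        # Render the page in Splash first so JavaScript-built content exists.
        yield SplashRequest("https://example.com", self.parse, args={"wait": 1.0})

    def parse(self, response):
        yield {"title": response.xpath("//title/text()").get()}
```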

Need tests? Need failure alerts in Slack? Try Spidermon.

Want to easily deploy your spiders? Try Scrapyd.

Need to monitor your spiders in real time? Try ScrapydWeb.

You want rotating proxies or Tor? There's a downloader middleware for that.
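A downloader middleware is just a class hooked into the request pipeline. This toy sketch (placeholder proxies, not production code) picks one per request via request.meta["proxy"], which Scrapy's downloader honours:

```python
import random


class RotatingProxyMiddleware:
    """Toy middleware: enable it via DOWNLOADER_MIDDLEWARES in settings.py."""

    PROXIES = [
        "http://proxy1.example.com:8080",  # placeholders
        "http://proxy2.example.com:8080",
    ]

    def process_request(self, request, spider):
        # Scrapy routes the request through whatever meta["proxy"] names.
        request.meta["proxy"] = random.choice(self.PROXIES)
```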

It goes on...

Use the Right Tool for the Right Job.
