Binder

Web-Crawling: Essential Tools for Wrangling Web Data in Python

An introduction to web-crawling/scraping for beginners with some Python know-how. Created for IC2S2 Summer 2022 by Jaren Haber, PhD.

Overview

When it comes to data collection, web-crawling (i.e., web-scraping, screen-scraping) is a common approach in our increasingly digital era, and a common stumbling block. With such a wide range of tools and languages available (Selenium, Requests, and HTML, to name just a few), developing and implementing a web-crawling pipeline is often a frustrating experience for researchers, especially those without a computer science background.

Whatever your background, this workshop will give you the foundation to use web-crawling in your research. We will tackle common problems including collecting web addresses/URLs (by automated Google search), downloading website copies (with wget), non-scalable website scraping (with Requests), and scalable crawling of text (with Scrapy). No web-crawling experience is required, but some Python know-how is expected.
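As a preview of the non-scalable approach, here is a minimal sketch using Requests and BeautifulSoup (both covered in the workshop); the URL is a placeholder, not part of the workshop materials:

import requests
from bs4 import BeautifulSoup

# Fetch a single page; the URL here is a stand-in for whatever site you study
response = requests.get("https://example.com", timeout=10)
response.raise_for_status()  # fail loudly on 4xx/5xx errors

# Parse the HTML and pull out the visible text
soup = BeautifulSoup(response.text, "html.parser")
print(soup.get_text(separator=" ", strip=True))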

Workshop goals

  • Understanding the building blocks for digital data collection via web-crawling and -scraping
  • Intuitions around the uses and limits of:
    • APIs (Application Programming Interfaces)
    • Exploiting website structure (HTML/CSS)
    • Web-crawling for research at scale
  • Knowledge of common problems in web-crawling and their fixes (illustrated in the sketch after this list), like:
    • Nested websites --> vertical crawling (link extraction)
    • Getting blocked --> polite pauses between server requests
  • Hands-on skill with:
    • Collecting domains to scrape
    • Non-scalable website scraping with Requests
    • Parsing website text with BeautifulSoup
    • Crawling at scale with Scrapy
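To make these fixes concrete, here is a skeletal Scrapy spider showing vertical crawling (link extraction) plus polite pauses between server requests; the class name, start URL, and selectors are illustrative assumptions, not the workshop's exact code:

import scrapy

class TextSpider(scrapy.Spider):
    name = "text_spider"
    start_urls = ["https://example.com"]  # placeholder start page
    custom_settings = {
        "DOWNLOAD_DELAY": 2,     # polite pause (in seconds) between requests
        "ROBOTSTXT_OBEY": True,  # respect each site's crawling rules
    }

    def parse(self, response):
        # Scrape this page's paragraph text...
        yield {"url": response.url,
               "text": " ".join(response.css("p::text").getall())}
        # ...then crawl vertically: extract links and follow them
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)

You could run a spider like this with scrapy runspider spider.py -o output.json (assuming it lives in a file named spider.py).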

Prerequisites

We will get our hands dirty implementing an assortment of simple web-crawling tools. To follow along with the code (which is the point), you will need some familiarity with Python and Jupyter Notebooks. If you haven't programmed in Python or haven't used Jupyter Notebooks, please do some self-teaching before this workshop using resources like those listed below.

Getting started & software prerequisites

For simplicity, just click the "Launch Binder" button (at the top of this README) to create a virtual environment ready for this workshop. It may take a few minutes; if it takes longer than 10 minutes, try again.

If you want to run the code on your own computer, you have two options. The first is to use Anaconda, which makes installation easy: download Anaconda. The second, if you already have Python 3.x with the libraries listed in requirements.txt, or you don't mind installing everything in a virtual environment (best practice when working locally), is to clone this repository and follow along on your own machine. You can install all the necessary packages like so:

pip3 install -r requirements.txt
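If you go the virtual-environment route, a typical sequence (assuming a Unix-like shell; the environment name venv is arbitrary) looks like:

python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt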

Open-Access Resources

  • Slides (also in folder above)
  • Python and Jupyter Notebooks
  • Web-crawling with Scrapy & friends
  • Other useful libraries
  • O'Reilly books on scraping: available free for some universities, like Georgetown or UC Berkeley (log in here, then search for books)

Contributing

If you spot a problem with these materials, please open an issue describing the problem or contact Jaren at jhaber@berkeley.edu. To suggest additional resources or materials, please create a branch and make a pull request!

Acknowledgments
