Scrapy


Overview

Scrapy is a fast, high-level web crawling and web scraping framework used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.

For more information, including a list of features, check the Scrapy homepage at: http://scrapy.org
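
As a quick taste, here is a minimal spider sketch. It is an illustrative example, not part of this README's official tutorial material, and it assumes the public demo site http://quotes.toscrape.com and its markup (div.quote, span.text, small.author, li.next). It crawls the site, yields one structured item per quote, and follows pagination:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # Yield one structured item per quote block on the page
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
            }
        # Follow the "next page" link, if present
        # (the li.next selector is an assumption about the demo site)
        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            yield scrapy.Request(response.urljoin(next_page))

Saved as quotes_spider.py, this can be run without creating a project using scrapy runspider quotes_spider.py -o quotes.json, which writes the scraped items to a JSON file.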

Requirements

  • Python 2.7 or Python 3.3+
  • Works on Linux, Windows, Mac OS X, BSD

Install

The quick way:

pip install scrapy

For more details see the install section in the documentation: http://doc.scrapy.org/en/latest/intro/install.html
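
To sanity-check the installation, the command-line tool can report its version and generate a project skeleton (myproject is just a placeholder name):

scrapy version
scrapy startproject myproject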

Releases

You can download the latest stable and development releases from: http://scrapy.org/download/

Documentation

Documentation is available online at http://doc.scrapy.org/ and in the docs directory.
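
To build the documentation locally, assuming Sphinx is installed and the docs directory keeps its usual Makefile-based Sphinx layout, something like the following should work:

pip install sphinx
cd docs
make html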

Community (blog, Twitter, mailing list, IRC)

See http://scrapy.org/community/

Contributing

Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md).

By participating in this project, you agree to abide by its terms. Please report unacceptable behavior to opensource@scrapinghub.com.

For guidelines on how to contribute, see http://doc.scrapy.org/en/master/contributing.html

Companies using Scrapy

See http://scrapy.org/companies/

Commercial Support

See http://scrapy.org/support/
