Scrapy, a fast high-level web crawling & scraping framework for Python.

Scrapy

Overview

Scrapy is a fast, high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.

For more information, including a list of features, check the Scrapy homepage at: http://scrapy.org

Requirements

  • Python 2.7 or Python 3.3+
  • Works on Linux, Windows, Mac OS X, BSD

Install

The quick way:

pip install scrapy

For more details see the install section in the documentation: http://doc.scrapy.org/en/latest/intro/install.html

Releases

You can download the latest stable and development releases from: http://scrapy.org/download/

Documentation

Documentation is available online at http://doc.scrapy.org/ and in the docs directory.

Community (blog, Twitter, mailing list, IRC)

See http://scrapy.org/community/

Contributing

See http://doc.scrapy.org/en/master/contributing.html

Code of Conduct

Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md).

By participating in this project you agree to abide by its terms. Please report unacceptable behavior to opensource@scrapinghub.com.

Companies using Scrapy

See http://scrapy.org/companies/

Commercial Support

See http://scrapy.org/support/