Scrapy, a fast high-level web crawling & scraping framework for Python.

Scrapy

Overview

Scrapy is a fast, high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.

For more information, including a list of features, check the Scrapy homepage at: https://scrapy.org

Requirements

  • Python 2.7 or Python 3.3+
  • Works on Linux, Windows, macOS, and BSD

Install

The quick way:

pip install scrapy

For more details see the install section in the documentation: https://doc.scrapy.org/en/latest/intro/install.html

Documentation

Documentation is available online at https://doc.scrapy.org/ and in the docs directory.

Releases

You can find release notes at https://doc.scrapy.org/en/latest/news.html

Community (blog, Twitter, mailing list, IRC)

See https://scrapy.org/community/

Contributing

See https://doc.scrapy.org/en/master/contributing.html

Code of Conduct

Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md).

By participating in this project you agree to abide by its terms. Please report unacceptable behavior to opensource@scrapinghub.com.

Companies using Scrapy

See https://scrapy.org/companies/

Commercial Support

See https://scrapy.org/support/