Scrapy

Overview

Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
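
As a quick illustration of the kind of code you write with Scrapy, here is a minimal spider sketch (the spider name, target site and CSS selectors below are only illustrative):

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["http://quotes.toscrape.com"]

        def parse(self, response):
            # Yield one item (a plain dict) per quote block found on the page
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").extract_first(),
                    "author": quote.css("small.author::text").extract_first(),
                }

Saved as quotes_spider.py, it can be run without creating a project, for example with: scrapy runspider quotes_spider.py -o quotes.json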

For more information, including a list of features, check the Scrapy homepage at: http://scrapy.org

Requirements

  • Python 2.7
  • Works on Linux, Windows, Mac OS X, BSD

Install

The quick way:

pip install scrapy

For more details see the install section in the documentation: http://doc.scrapy.org/en/latest/intro/install.html
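
To sanity-check the installation and bootstrap a new project from the command line (the project and spider names below are only placeholders), you can run:

    scrapy version
    scrapy startproject myproject
    cd myproject
    scrapy genspider example example.com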

Releases

You can download the latest stable and development releases from: http://scrapy.org/download/

Documentation

Documentation is available online at http://doc.scrapy.org/ and in the docs directory.

Community (blog, Twitter, mailing list, IRC)

See http://scrapy.org/community/

Contributing

See http://doc.scrapy.org/en/master/contributing.html

Companies using Scrapy

See http://scrapy.org/companies/

Commercial Support

See http://scrapy.org/support/
