A project to generate point of interest (POI) data sourced from websites with 'store location' pages. The project uses scrapy, a popular Python-based web scraping framework, to execute individual site spiders that retrieve POI data, publishing the results in a standard format. There are various scrapy tutorials on the Internet and this series on YouTube is reasonable.
Windows users may need to follow some extra steps; please follow the scrapy docs for up-to-date details.
These instructions were tested with Ubuntu 22.04.1 LTS on 2024-02-21.
- Install Python 3 and `pip`:

  ```
  $ sudo apt-get update
  $ sudo apt-get install -y python3 python3-pip python-is-python3
  ```
- Install `pyenv` and ensure the correct version of Python is available. The following is a summary of the steps; please refer to the pyenv documentation for the most up-to-date instructions.

  ```
  $ sudo apt-get install -y build-essential libssl-dev zlib1g-dev \
      libbz2-dev libreadline-dev libsqlite3-dev curl git \
      libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev \
      libffi-dev liblzma-dev
  $ curl https://pyenv.run | bash
  $ echo 'export PATH="$HOME/.pyenv/bin:$PATH"' >> ~/.bashrc
  $ echo 'eval "$(pyenv init --path)"' >> ~/.bashrc
  $ echo 'eval "$(pyenv init -)"' >> ~/.bashrc
  $ exec "$SHELL"
  $ pyenv install 3.11
  ```
- Install `pipenv` and check that it runs:

  ```
  $ pip install --user pipenv
  $ pipenv --version
  pipenv, version 2023.12.1
  ```
- Clone a copy of the project from the All the Places repo (or your own fork if you are considering contributing to the project):

  ```
  $ git clone git@github.com:alltheplaces/alltheplaces.git
  ```
- Use `pipenv` to install the project dependencies:

  ```
  $ cd alltheplaces
  $ pipenv sync
  ```
- Test for successful project installation:

  ```
  $ pipenv run scrapy
  ```

  If the above runs without complaint, then you have a functional installation and are ready to run and write spiders.
These instructions were tested with macOS 14.3.1 on 2024-02-21.
- Install Python 3 and `pip`:

  ```
  $ brew install python@3
  ```
- Install `pyenv` and ensure the correct version of Python is available. The following is a summary of the steps; please refer to the pyenv documentation for the most up-to-date instructions.

  ```
  $ brew install pyenv
  $ echo 'eval "$(pyenv init --path)"' >> ~/.zshrc
  $ echo 'eval "$(pyenv init -)"' >> ~/.zshrc
  $ exec "$SHELL"
  $ pyenv install 3.11
  ```
- Install `pipenv` and check that it runs:

  ```
  $ brew install pipenv
  $ pipenv --version
  pipenv, version 2023.12.1
  ```
- Clone a copy of the project from the All the Places repo (or your own fork if you are considering contributing to the project):

  ```
  $ git clone git@github.com:alltheplaces/alltheplaces.git
  ```
- Use `pipenv` to install the project dependencies:

  ```
  $ cd alltheplaces
  $ pipenv sync
  ```
- Test for successful project installation:

  ```
  $ pipenv run scrapy
  ```

  If the above runs without complaint, then you have a functional installation and are ready to run and write spiders.
You can use GitHub Codespaces to run the project. This is a cloud-based development environment that is created from the project's repository and includes a pre-configured environment with all the tools you need to develop the project. To use Codespaces, click the button below:
You can use Docker to run the project. This is a container-based development environment that is created from the project's repository and includes a pre-configured environment with all the tools you need to develop the project.
- Clone a copy of the project from the All the Places repo (or your own fork if you are considering contributing to the project):

  ```
  $ git clone git@github.com:alltheplaces/alltheplaces.git
  ```
- Build the Docker image:

  ```
  $ cd alltheplaces
  $ docker build -t alltheplaces .
  ```
- Run the Docker container:

  ```
  $ docker run -it alltheplaces
  ```
Many of the sites provide their data in a standard format. Others export their data via simple APIs. We have a number of guides to help you develop spiders:
- What should I call my spider?
- Using Wikidata and the Name Suggestion Index
- Sitemaps make finding POI pages easier
- Data from many POI pages can be extracted without writing code
- What is expected in a pull request?
- What we do behind the scenes
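Many store pages embed their POI details as schema.org structured data in a `<script type="application/ld+json">` block, which is why data can often be extracted without site-specific parsing code. As an illustration only (not the project's actual scrapy-based implementation), here is a stdlib-only sketch of pulling a POI out of such a page; the HTML below is a hypothetical example:

```python
import json
from html.parser import HTMLParser


class JSONLDParser(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self._in_ld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_ld = True

    def handle_data(self, data):
        if self._in_ld:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._in_ld:
            self.blocks.append(json.loads("".join(self._buf)))
            self._buf = []
            self._in_ld = False


# A hypothetical store page embedding schema.org LocalBusiness data.
html = """
<html><head>
<script type="application/ld+json">
{"@type": "LocalBusiness", "name": "Example Store",
 "geo": {"latitude": 51.5, "longitude": -0.1}}
</script>
</head><body>...</body></html>
"""

parser = JSONLDParser()
parser.feed(html)
poi = parser.blocks[0]
print(poi["name"], poi["geo"]["latitude"], poi["geo"]["longitude"])
```

In the real project, scrapy's selectors and shared helper code handle this; the sketch only shows why so many sites need no bespoke extraction logic.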
The output from running the project is published on a regular cadence to our website: alltheplaces.xyz. You should not run all the spiders yourself to obtain the output: the less the project "bothers" a website, the more we will be tolerated.
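The published output is GeoJSON, one feature per POI. As a sketch of consuming a download with the standard library (the feature below is hypothetical sample data, not a real record):

```python
import json

# A hypothetical FeatureCollection in the style of the published output.
feature_collection = json.loads("""
{"type": "FeatureCollection", "features": [
  {"type": "Feature",
   "geometry": {"type": "Point", "coordinates": [-0.1, 51.5]},
   "properties": {"name": "Example Store", "brand": "Example"}}
]}
""")

for feature in feature_collection["features"]:
    # GeoJSON orders coordinates as [longitude, latitude].
    lon, lat = feature["geometry"]["coordinates"]
    print(feature["properties"]["name"], lat, lon)
```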
Communication is primarily through tickets on the project GitHub issue tracker. Many contributors are also present on OSM US Slack; in particular, we watch the #poi channel.
The data generated by our spiders is provided on our website and released under Creative Commons’ CC-0 waiver.
The spider software that produces this data (this repository) is licensed under the MIT license.