FAO Food Aid Shipments and Food Security Data Collector

HDX collector for FAO Data.

Introduction

hdxscraper-fao operates in the following way:

  • Downloads faostat-bulkdownloads zip files
  • Extracts and normalizes the csv file
  • Places the resulting data for each file into a separate database table (a rough manual equivalent is sketched below)
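The steps above amount to something you could approximate by hand. Here is a rough shell sketch, assuming a placeholder download URL and file name (the collector also normalizes the CSV before loading it):

curl -L -o dataset.zip "http://example.org/faostat-bulkdownloads/SomeDataset.zip"
unzip dataset.zip                     # extracts SomeDataset.csv
sqlite3 scraperwiki.sqlite <<'EOF'
.mode csv
.import SomeDataset.csv some_dataset
EOF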

With hdxscraper-fao, you can:

  • Save FAO Data to an external database
  • Create CKAN datasets/packages for each database table
  • Upload ScraperWiki generated CSV files into a CKAN instance
  • Update resources previously uploaded to CKAN with new metadata

View the live data

Requirements

hdxscraper-fao has been tested on the following configuration:

  • MacOS X 10.9.5
  • Python 2.7.10
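To check that your local versions are in the same ballpark, you can run the following sanity checks (these are not hard requirements, just what the project was tested against):

python --version          # expect something like Python 2.7.x
sw_vers -productVersion   # on OS X, prints the system version, e.g. 10.9.5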

hdxscraper-fao requires the following in order to run properly:

Setup

local

(You are using a virtualenv, right?)
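If you don't have one yet, a minimal virtualenv setup might look like this (the directory name is arbitrary):

pip install virtualenv
virtualenv venv
source venv/bin/activate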

git clone https://github.com/reubano/hdxscraper-fao.git
cd hdxscraper-fao
pip install -r requirements.txt
manage setup

ScraperWiki Box

rm -rf tool
git clone https://github.com/reubano/hdxscraper-fao.git tool
cd tool
make setup

Usage

local

manage run

ScraperWiki Box

cd tool
source venv/bin/activate
screen manage -m Scraper run
# Now press `Ctrl-a d`
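Pressing `Ctrl-a d` only detaches from the screen session; the scraper keeps running in the background. To check on it later, reattach with screen's standard commands:

screen -ls    # list running screen sessions
screen -r     # reattach to the detached session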

The results will be stored in the SQLite database `scraperwiki.sqlite`.
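To peek at the results, open the database with the sqlite3 command-line shell (the table name below is only an example; actual names depend on which FAO files were processed):

sqlite3 scraperwiki.sqlite '.tables'
sqlite3 scraperwiki.sqlite 'SELECT * FROM some_table LIMIT 5;'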

view all available commands

manage

Upload tables to HDX/CKAN

upload to production site

manage upload

upload to staging site

manage upload -s

Update ScraperWiki box with new code

cd tool
make update
source venv/bin/activate
screen manage -m Scraper run
# Now press `Ctrl-a d`

Configuration

hdxscraper-fao will use the following Environment Variables if set:

Environment Variable   Description
CKAN_API_KEY           Your CKAN API key
CKAN_PROD_URL          Your CKAN instance remote production URL
CKAN_REMOTE_URL        Your CKAN instance remote staging URL
CKAN_USER_AGENT        Your user agent
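For a local run, one way to provide these is to export them in your shell before calling manage (all values below are placeholders):

export CKAN_API_KEY='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
export CKAN_PROD_URL='https://ckan.example.org'
export CKAN_REMOTE_URL='https://staging.ckan.example.org'
export CKAN_USER_AGENT='hdxscraper-fao'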

Creating a new collector

If you would like to create a collector or scraper from scratch, check out cookiecutter-collector.

pip install cookiecutter
cookiecutter https://github.com/reubano/cookiecutter-collector.git

Contributing

Code

  1. fork
  2. commit
  3. submit PR
  4. ???
  5. PROFIT!!!

Document

  • improve this readme
  • add comments to confusing parts of the code
  • write a "Getting Started" guide
  • write additional deployment instructions (Heroku, AWS, Digital Ocean, GAE)

QA

  1. follow this guide and see if everything works as expected
  2. if something doesn't work, please submit an issue

License

hdxscraper-fao is distributed under the MIT License.