- Live version: http://survey.internationalbudget.org
Developed in collaboration between the International Budget Partnership and the Open Knowledge Foundation. Written by Tom Rees, Hélène Durand, Tryggvi Björgvinsson, and Damjan Velickovski.
This codebase contains two applications, explorer and tracker, which function together from the end-user perspective. They are logically separated in the code and served together, on different routes, through a central Express Node app (see /app.js for details).
The explorer is the largest part of the web application, providing most of the endpoints, and is served from the root route (/).
The explorer application is a static Backbone app (served through Express), built using webpack. Its data is built from static files stored in the ./data directory. See below for more details.
The tracker app is concerned with the 'Document Availability' page and is served from the /availability route. It is an Express app. Its data is retrieved at runtime from an external API using the separate ibp-explorer-data-client app.
In addition to the explorer and tracker applications, there's another small static app to serve the questionnaire review pages.
A page for each country in the survey is built, with questions and answers from the survey questionnaire, for ease of review. These can be accessed with a username and password at /questionnaires. These pages are built, each time the app is deployed, from data defined in a .csv file hosted on Google Sheets.
The questionnaire data spreadsheet ID, along with the username and password, are set as environment variables, as defined below.
The static pages are built using Metalsmith into /_build-questionnaires and served as a static site from the central express app.
To run locally:
- Clone this repository.
- Install Node.js.
- Set the environment variables needed for ibp-explorer-data-client in `.env`.
- Run `npm install` in the root directory of this repo to install dependencies.
- Run `npm run build:dev` to bundle the front-end for the explorer, build the tracker, and build a small sample of the questionnaire pages. If you want to watch for code changes, use `npm run build:dev:watch`; this will also start the server.
- Run `npm run build:dev:tracker` or `npm run build:dev:tracker:watch` to do the same only for the tracker.
- Run `npm run build:dev:explorer` or `npm run build:dev:explorer:watch` to do the same only for the explorer.
- Run `npm run build:questionnaires:dev` to build only the questionnaires.
- Run `npm run start` to start the Node server.
- Point your browser at http://localhost:3000.
To deploy:
- Get the above working.
- Kill any running processes from ibp-explorer.
- Set the production `PORT`.
- Run `npm run build:prod`. This will build a minified version of the tracker, the explorer, and all the questionnaire review pages.
Environment variables:
- `PORT` - port on which the server will listen. Default is 3000.
- `TRACKER_LAST_UPDATE` - date displayed on the Availability page indicating when the last API update occurred.

You will also need to set the environment variables needed by ibp-explorer-data-client:
- For calls to the Indaba API:
  - `API_BASE` - base URL for the API
  - `API_USERNMAE` - username for the API
  - `API_PASSWORD` - password for the API
- For Google Drive files/folders:
  - `SERVICE_CREDENTIALS` - Google service JSON token. You can do `` export SERVICE_CREDENTIALS=`cat <path_to_credentials.json>` ``
  - `DRIVE_ROOT` - which Google Drive folder serves as the root when searching for documents
- For AWS S3 storage:
  - `AWS_ACCESS_KEY_ID` - your access key
  - `AWS_SECRET_ACCESS_KEY` - your secret access key
  - `AWS_REGION` - region where the bucket is
  - `AWS_BUCKET` - name of the bucket where snapshots are stored
- For Google Drive library reindexing:
  - `DRIVE_ROOT` - ID of the root folder in which documents should be searched
  - `SPREADSHEET_ID` - ID of the spreadsheet to which found documents should be written
- For the questionnaire:
  - `QUESTIONNAIRE_AUTH` - username and password used to restrict access to the questionnaire URLs, in the form `username:password`
  - `QUESTIONNAIRE_SPREADSHEET_ID` - Google Sheets spreadsheet ID for the questionnaire data source
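For reference, a complete local `.env` could look like the sketch below. Every value is a placeholder to substitute with your own credentials; only the variable names come from the documentation above:

```shell
# Central app
PORT=3000
TRACKER_LAST_UPDATE=2017-01-31   # placeholder; use the date of the last API update

# Indaba API (URL and credentials are placeholders)
API_BASE=https://indaba.example.org/api
API_USERNMAE=api-user
API_PASSWORD=change-me

# Google Drive (SERVICE_CREDENTIALS holds the JSON itself; in a shell you
# would typically export it with: export SERVICE_CREDENTIALS=`cat <path_to_credentials.json>`)
SERVICE_CREDENTIALS={"type":"service_account"}
DRIVE_ROOT=drive-folder-id

# AWS S3 snapshots
AWS_ACCESS_KEY_ID=AKIA-placeholder
AWS_SECRET_ACCESS_KEY=change-me
AWS_REGION=eu-west-1
AWS_BUCKET=my-snapshots-bucket

# Reindexing and questionnaires
SPREADSHEET_ID=sheet-id
QUESTIONNAIRE_AUTH=username:password
QUESTIONNAIRE_SPREADSHEET_ID=questionnaire-sheet-id
```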
To test:
- Run webpack-dev-server with `npm run start`.
- Run `npm run test`.
All the data lives in the ./data folder, along with a Python Extract-Transform-Load (ETL) tool that runs it through a complicated data massage. The outputs are:
- `./vendor/ibp_dataset.js`, which is used by the JavaScript datatool.
- `./app/assets/downloads/`, which is filled with downloadable files.
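The shape of the JS output can be mimicked with a short stdlib-only sketch. The real pipeline is `./data/etl.py` and uses openpyxl/unicodecsv/xlrd; the function, variable names, and sample row below are invented for illustration:

```python
# Hypothetical sketch of writing a JS-consumable dataset file, mimicking
# the shape of ./vendor/ibp_dataset.js. Not the actual etl.py logic.
import json
import os
import tempfile

def write_js_dataset(rows, path):
    """Serialize rows as a JS file that assigns the dataset to a global,
    so a browser can load it with a plain <script> tag (illustrative)."""
    payload = json.dumps({"rows": rows}, ensure_ascii=False)
    with open(path, "w", encoding="utf-8") as f:
        f.write("var ibp_dataset = " + payload + ";\n")

# Invented sample row; the real data comes from the Excel files in ./data.
rows = [{"country": "Example", "score": 61}]
out = os.path.join(tempfile.mkdtemp(), "ibp_dataset.js")
write_js_dataset(rows, out)
```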
To update the data:
- Modify the Excel files in the `./data` folder.
To get those changes processed by the tool:
- Get Python set up on your system.
- Install Pip, the Python package manager.
- Install the dependencies: `pip install openpyxl`, `pip install unicodecsv`, and `pip install xlrd`.
- You're all set up. Run `python etl.py` to regenerate the outputs.
- Run the tool locally to prove it works.
- Follow the above deployment instructions to get it ready for a live server.
To update translations:
- Run `npm run extract-pot` to extract all the strings for translation into a .pot file.
- Run `npm run merge-po` to merge the new strings for translation into the existing .po files.
- Update the translations in the .po files.
- Run `npm run compile-json` to compile the .po files to the JSON message files which the app uses.