Update README [Resolves #23] (#26)
thcrock committed Apr 19, 2017
1 parent 9db07d0 commit a0c2972
Showing 1 changed file, README.md, with 19 additions and 14 deletions.
Provides a complete and standard data store for canonical and emerging skills,
knowledge, abilities, tools, technologies, and how they relate to jobs.

## Overview
A web application to serve the [Open Skills API](http://api.dataatwork.org/v1/spec/).

An overview of the API is maintained in this repository's Wiki: [API Overview](https://github.com/workforce-data-initiative/skills-api/wiki/API-Overview)


## Loading Data
The data necessary to drive the Open Skills API is loaded through the tasks in the [skills-airflow](https://github.com/workforce-data-initiative/skills-airflow/) project. Follow the instructions in that repo to run the workflow, which loads data into a database and an Elasticsearch endpoint. You will need the database credentials and the Elasticsearch endpoint when configuring this application.

## Dependencies
- Python 2.7.11
- Postgres database with skills and jobs data loaded (see skills-airflow note above)
- Elasticsearch 5.x instance with job normalization data loaded (see skills-airflow note above)
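
A quick way to check the first two prerequisites locally (this assumes Elasticsearch is reachable on its default port, 9200; adjust the host to wherever your instance actually runs):
```
$ python --version              # should report Python 2.7.x
$ curl -s http://localhost:9200 # should return Elasticsearch 5.x cluster info
```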

## Installation
To run the API locally, please perform the following steps:
1. Clone the repository from [https://www.github.com/workforce-data-initiative/skills-api](https://www.github.com/workforce-data-initiative/skills-api)
```
$ git clone https://www.github.com/workforce-data-initiative/skills-api
```
2. Navigate to the checked-out project
```
$ cd skills-api
```

…

```
$ source venv/bin/activate
$ pip install -r requirements.txt
```

8. Make the regular (development) config. Run `bin/make_config.sh` and fill in the connection string to the database used in skills-airflow.
```
$ bin/make_config.sh
```

9. Add an `ELASTICSEARCH_HOST` variable to `config/config.py`, pointing to the Elasticsearch instance that holds the job normalization data from skills-airflow. A sketch of the resulting file is shown below.
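
A minimal sketch of what `config/config.py` might contain after steps 8 and 9. `ELASTICSEARCH_HOST` comes from the step above; the `SQLALCHEMY_DATABASE_URI` key name is an assumption based on common Flask conventions, so use whatever name `bin/make_config.sh` actually generates:
```
# config/config.py -- illustrative sketch only
# SQLALCHEMY_DATABASE_URI is an assumed key name; check the file
# generated by bin/make_config.sh for the real one.
SQLALCHEMY_DATABASE_URI = 'postgresql://user:password@localhost:5432/skills'

# The Elasticsearch instance holding the job normalization data
# loaded by skills-airflow.
ELASTICSEARCH_HOST = 'localhost'
```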

10. Clone the development config for the test config. Copy the resulting `config/config.py` to `config/test_config.py` and modify the SQL connection string to match your test database (you can leave this the same as your development database if you wish, but we recommend keeping them separate).
```
$ cp config/config.py config/test_config.py
```
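
Typically the only line that needs to change in `config/test_config.py` is the connection string, for example (again assuming the `SQLALCHEMY_DATABASE_URI` key name):
```
# config/test_config.py -- illustrative; the key name is an assumption
SQLALCHEMY_DATABASE_URI = 'postgresql://user:password@localhost:5432/skills_test'
```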

Now you can run the Flask server.
```
(venv) $ python server.py runserver
```

Navigate to `http://127.0.0.1:5000/v1/jobs` and you should see a listing of jobs. You can check out more endpoints in the [API Specification](http://127.0.0.1:5000/v1/spec/).
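
For a quick smoke test from the command line (assuming the server is running on the default port shown above):
```
$ curl http://127.0.0.1:5000/v1/jobs
```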
