
Open Skills API - Sharing the DNA of America's Jobs

Provides a complete and standard data store for canonical and emerging skills, knowledge, abilities, tools, technologies, and how they relate to jobs.


A web application to serve the Open Skills API.

An overview of the API is maintained in this repository's Wiki: API Overview

Loading Data

The data necessary to drive the Open Skills API is loaded through the tasks present in the skills-airflow project. Follow the instructions in that repo to run the workflow and load data into a database, along with an Elasticsearch endpoint. You will use the database credentials and Elasticsearch endpoint when configuring this application.
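Once skills-airflow has finished, you will have a set of database credentials and an Elasticsearch endpoint to plug into this app's configuration. A minimal sketch of assembling those values is below; the helper name, key names, and URL shape are illustrative assumptions, not the repo's actual config keys.

```python
# Hypothetical helper for assembling the connection settings this app needs.
# Key names (DATABASE_URL, ELASTICSEARCH_HOST) are assumptions for illustration.
def build_database_url(user, password, host, port, dbname):
    """Build a SQLAlchemy-style Postgres connection string."""
    return "postgresql://{0}:{1}@{2}:{3}/{4}".format(
        user, password, host, port, dbname)

# Placeholder values standing in for the credentials skills-airflow was run with.
DATABASE_URL = build_database_url("skills", "secret", "localhost", 5432, "open_skills")
ELASTICSEARCH_HOST = "localhost:9200"
```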


Requirements

  • Python 2.7.11
  • A Postgres database with skills and jobs data loaded (see the skills-airflow note above)
  • An Elasticsearch 5.x instance with job normalization data loaded (see the skills-airflow note above)
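Because the project pins an old interpreter, it can help to verify the Python version before building the virtual environment. A small sketch (the function is hypothetical, written to run unchanged on Python 2 or 3 so it can report a mismatch):

```python
import sys

# Hypothetical sanity check that the running interpreter matches the
# Python 2.7 requirement listed above.
def is_supported(version_info, required=(2, 7)):
    """Return True when major.minor of version_info matches required."""
    return tuple(version_info[:2]) == required

# Usage: is_supported(sys.version_info)
```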


To run the API locally, please perform the following steps:

  1. Clone the repository
$ git clone
  2. Navigate to the checked-out project
$ cd skills-api
  3. Ensure that the pip package manager is installed. See the installation instructions here.
$ pip --version
  4. Install the virtualenv package. Please review the documentation if you are unfamiliar with how virtualenv works.
$ pip install virtualenv
  5. Create a Python 2.7.11 virtual environment called venv in the project root directory
$ virtualenv -p /path/to/python/2.7.11 venv
  6. Activate the virtual environment. Note that the name of the virtual environment (venv) will be prepended to the command prompt.
$ source venv/bin/activate
(venv) $
  7. Install dependencies from requirements.txt
$ pip install -r requirements.txt
  8. Make the regular (development) config. Run bin/ and fill in the connection string for the database used in skills-airflow.
$ bin/
  9. Add an ELASTICSEARCH_HOST variable to config/ pointing to the Elasticsearch instance that holds the job normalization data from skills-airflow.
  10. Clone the development config for the test config: copy the resultant config/ to config/ and modify the SQL connection string to match your test database (you can leave this the same as your development database if you wish, but we recommend keeping them separate).

$ cp config/ config/
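The config file names are elided in this README, but the development/test split described above might look like the following sketch. The class names and keys are assumptions for illustration; consult the actual generated config files for the real settings.

```python
# Hypothetical sketch of the development vs. test config split recommended
# above. Class and key names are assumptions, not the repo's actual ones.
class DevelopmentConfig(object):
    SQLALCHEMY_DATABASE_URI = "postgresql://user:pass@localhost:5432/open_skills_dev"
    ELASTICSEARCH_HOST = "localhost:9200"

class TestConfig(DevelopmentConfig):
    # Point tests at a separate database, as the README recommends.
    SQLALCHEMY_DATABASE_URI = "postgresql://user:pass@localhost:5432/open_skills_test"
```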

Now you can run the Flask server.

(venv) $ python runserver

Navigate to the server's address and you should see a listing of jobs. You can check out more endpoints in the API Specification.
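A quick smoke test against the running dev server might look like the sketch below. The base URL (Flask's development default, http://localhost:5000) and the /jobs path are assumptions; check the API Specification for the real routes.

```python
import json
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

def jobs_url(base_url="http://localhost:5000"):
    """Build the URL for the assumed jobs listing endpoint."""
    return base_url.rstrip("/") + "/jobs"

def list_jobs(base_url="http://localhost:5000"):
    """Fetch the jobs listing from a running dev server and decode the JSON."""
    response = urlopen(jobs_url(base_url))
    return json.loads(response.read().decode("utf-8"))
```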
