# Origami Repo Data
Get information about Origami repositories. See the production service for API information.
## Table Of Contents
- Running Locally
- Operational Documentation
If you're working on a Mac, the simplest way to install PostgreSQL is to use Homebrew. Run the following and pay attention to the instructions output after installing:
```shell
brew install postgresql
```
Before we can run the application, we'll need to install dependencies:
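The install command itself wasn't preserved in this copy, but the testing section later in this README refers to `make install`, so presumably:

```shell
make install
```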
Create a local PostgreSQL database. You may need to provide credentials for the following command, depending on your local setup:
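The exact command wasn't preserved here. With the PostgreSQL client tools installed, a database can be created with `createdb`; the database name below is an assumption, inferred from the test database name `origami-repo-data-test` used later in this README:

```shell
# Database name is assumed — check the repo's Makefile or .env for the real one
createdb origami-repo-data
```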
Now you'll need to migrate the database, which sets up the required tables. You'll also need to run this command if you pull commits which include new database migrations:
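Based on the database commands listed later in this README, the migration step is:

```shell
make db-migrate-up
```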
Run the application in development mode with:
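The exact target isn't preserved in this copy. Origami services conventionally provide a Makefile target for this, so it is likely something along these lines (the target name is an assumption):

```shell
# Target name assumed — check the Makefile for the real one
make run-dev
```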
Now you can access the app over HTTP on the port you configured (the `PORT` environment variable).
We configure Origami Repo Data using environment variables. In development, configuration is set in a `.env` file. In production, these are set through Heroku config. Further documentation on the available options can be found in the Origami Service documentation.
### One time only
- `ENABLE_SETUP_STEP`: Set to `true` in order to allow the creation of an admin key using the `/v1/setup` endpoint. Once a key has been created this way, this configuration should be removed for security reasons.
- `DATABASE_URL`: A PostgreSQL connection string, with write permission on a database.
- `GITHUB_AUTH_TOKEN`: A GitHub auth token which has read access to all Financial Times repositories.
- `NODE_ENV`: The environment to run the application in. One of `production`, `development` (default), or `test` (for use in automated tests).
- `PORT`: The port to run the application on.
### Required in Heroku
- `CMDB_API_KEY`: The CMDB API key to use when updating health checks and the application runbook.
- `FASTLY_PURGE_API_KEY`: A Fastly API key which is used to purge URLs (when somebody POSTs to the `/purge` endpoint).
- `GRAPHITE_API_KEY`: The FT's internal Graphite API key.
- `PURGE_API_KEY`: The API key to require when somebody POSTs to the `/purge` endpoint. This should be a non-memorable string, for example a UUID.
- `REGION`: The region the application is running in. One of `EU` or `US`.
- `RELEASE_LOG_API_KEY`: The change request API key to use when creating and closing release logs.
- `RELEASE_LOG_ENVIRONMENT`: The Salesforce environment to include in release logs. One of `Test` or `Production`.
- `SENTRY_DSN`: The Sentry URL to send error information to.
- `SLACK_ANNOUNCER_AUTH_TOKEN`: The Slack auth token to use when announcing new repo versions on Slack.
- `SLACK_ANNOUNCER_CHANNEL_ID`: The Slack channel to announce new repo versions in (the unique channel ID, not the channel name).
- `GRAFANA_API_KEY`: The API key to use when pushing and pulling Grafana dashboards.
The service can also be configured by sending HTTP headers; these would normally be set in your CDN config:

- `FT-Origami-Service-Base-Path`: The base path for the service. This gets prepended to all paths in the HTML and ensures that redirects work when the CDN rewrites URLs.
Most of the files which are used in maintaining your local database are in the `data` folder of this repo. This is split into migrations and seed data.
You can use the following commands to manage your local database:
```shell
make db-migrate-up    # migrate up to the latest version of the schema
make db-migrate-down  # revert the last applied migration
make db-seed          # add seed data to the database for local testing
```
To create a new migration file, you'll need to run:
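The exact command isn't preserved in this copy. Since migrations are managed with Knex, the Makefile target most likely wraps Knex's migration generator; a hedged sketch using Knex directly:

```shell
# Assumes knex is installed as a dev dependency of this repo;
# the migration name is a placeholder
npx knex migrate:make name-of-your-migration
```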
This will generate a file in `data/migration`, which you can update to include your `up` and `down` migrations. We use Knex for migrations; copying from an existing file may help.
Seed data for local development is in `data/seed/demo`. Every file in this directory will be used to seed the database when `make db-seed` is run.
The source documentation for the runbook and healthcheck endpoints (EU/US) is stored in the `operational-documentation` folder. These files are pushed to CMDB upon every promotion to production. You can push them to CMDB manually by running the following command:
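The command itself isn't preserved in this copy; it is a Makefile target in this repo. A hypothetical sketch (the target name is an assumption):

```shell
# Hypothetical target name — check the Makefile for the real one
make cmdb-update
```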
The tests are split into unit tests and integration tests. To run tests on your machine you'll need to install Node.js and run `make install`. Then you can run the following commands:
```shell
make test              # run all the tests
make test-unit         # run the unit tests
make test-integration  # run the integration tests
```
You can run the unit tests with coverage reporting, which expects 90% coverage or more:
```shell
make test-unit-coverage verify-coverage
```
The code will also need to pass linting on CI; you can run the linter locally with:
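The lint command isn't shown in this copy, but since `make lint` is the command that must pass before a pull request is merged, presumably:

```shell
make lint
```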
To run the integration tests, you'll need a local PostgreSQL database named `origami-repo-data-test`. You can set this up with:
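The setup command wasn't preserved here; one way to create the database, assuming the PostgreSQL client tools are installed, is:

```shell
createdb origami-repo-data-test
```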
We run the tests and linter on CI; you can view results on CircleCI. `make test` and `make lint` must pass before we merge a pull request.
The production (EU/US) and QA applications run on Heroku. We deploy continuously to QA via CircleCI; you should never need to deploy to QA manually. We use a Heroku pipeline to promote QA deployments to production.
You can promote either through the Heroku interface, or by running the following command locally:
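The command itself isn't preserved in this copy. The Heroku CLI provides `heroku pipelines:promote` for this; the QA app name below is an assumption, inferred from the production app names (`origami-repo-data-eu`/`origami-repo-data-us`) used later in this README:

```shell
# QA app name is assumed — check the Heroku pipeline for the real one
heroku pipelines:promote --app origami-repo-data-qa
```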
- Grafana dashboard: graph memory, load, and number of requests
- Pingdom check (Production EU): checks that the EU production app is responding
- Pingdom check (Production US): checks that the US production app is responding
- Sentry dashboard (Production): records application errors in the production app
- Sentry dashboard (QA): records application errors in the QA app
- Splunk (Production): query application logs
We've outlined some common issues that can occur in the running of Origami Repo Data:
What do I do if memory usage is high?
For now, restart the Heroku dynos:
```shell
heroku restart --app origami-repo-data-eu
heroku restart --app origami-repo-data-us
```
If this doesn't help, then a temporary measure could be to add more dynos to the production applications, or to switch the existing ones to higher-performance dynos.
What if I need to deploy manually?
If you really need to deploy manually, you should only do so to QA (production deploys should always be a promotion). Use the following command to deploy to QA manually:
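The command itself isn't preserved in this copy. One way to deploy manually is to push to the QA app's Heroku git remote; the remote name (and the default branch, which may be `master` on older apps) are assumptions:

```shell
# Assumes a git remote named "heroku-qa" pointing at the QA Heroku app
git push heroku-qa HEAD:main
```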
The Financial Times has published this software under the MIT license.