Data @ The Point of Care

Required services

DPC requires an external Postgres database to be running. While a separate Postgres server can be used, the docker-compose file includes everything needed, and can be started like so:

docker-compose up start_core_dependencies

Warning: If you have an existing Postgres database running on port 5432, docker-compose WILL NOT alert you to the port conflict. Ensure any local Postgres databases are stopped before starting docker-compose.

By default, the application attempts to connect to the dpc_attribution, dpc_queue, and dpc_auth databases on localhost as the postgres user with a password of dpc-safe. When using docker-compose, all the required databases are created automatically and initialized with the correct data on container startup. If for some reason this behavior is not desired, set an environment variable of DB_MIGRATION=0.
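
For example, starting the core dependencies with the automatic migrations disabled (a sketch; it assumes docker-compose forwards DB_MIGRATION into the containers):

DB_MIGRATION=0 docker-compose up start_core_dependencies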

The defaults can be overridden in the configuration files. Common configuration options (such as database connection strings) are stored in a server.conf file and included in the various modules via an include "server.conf" statement in the module application config files. See the dpc-attribution application.conf for an example.
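
A minimal sketch of that include pattern (illustrative only; see the actual module application.conf files for the real contents):

include "server.conf"

dpc.attribution {
  # module-specific settings and overrides go here
}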

Default settings can be overridden either directly in the module configurations, or via an application.local.conf file in the project root directory. For example, modifying the dpc-attribution configuration:

dpc.attribution {
  database = {
    driverClass = org.postgresql.Driver
    url = "jdbc:postgresql://localhost:5432/dpc-dev"
    user = postgres
  }
}

Note: On startup, the services look for a local override file (application.local.conf) in the root of their current working directory. This can create an issue when running tests with IntelliJ, which by default sets the working directory to the module root, meaning any local overrides are ignored. This can be fixed by setting the working directory to the project root, but it needs to be done manually.

Building DPC

There are two ways to build DPC.

Note: DPC only supports Java 11 due to our use of new language features, which prevents using older JDK versions. In addition, some of our upstream dependencies have not been updated to support Java 12 and newer, but we plan on adding support at a later date.

Option 1: Full Integration Test

Run make ci-app. This will start the dependencies, build all components, run the integration tests, and run a full end-to-end test. You will be left with compiled JARs for each component, as well as built Docker images.

Option 2: Manually

Run make docker-base to build the common, baseline Docker image (i.e., dpc-base:latest) used across DPC services.

Then, run mvn clean install to build and test the application. Dependencies will need to be up and running for this option to succeed.

Running mvn clean install will also construct the Docker images for the individual services. To skip the Docker build, pass -Djib.skip=True.
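
For example, building and testing everything while skipping the Docker image construction:

mvn clean install -Djib.skip=True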

Note that the dpc-base image produced by make docker-base is not stored in a remote repository. The mvn clean install process relies on the base image being available via the local Docker daemon.

Running DPC

Once the JARs are built, they can be run in two ways: either via docker-compose or by manually running the JARs.

Running via Docker

The application (along with all required dependencies) can be automatically started with the following command: make start-app. Docker must be installed for this to work.

The individual services can be started (along with their dependencies) by passing the service name to the up command.

docker-compose up {db,dpc-aggregation,dpc-attribution,dpc-api}

By default, the Docker containers start with minimal authentication enabled, meaning that some functionality (such as extracting the organization_id from the access token) will not work as expected and will always return the same value. This can be overridden during startup by setting the AUTH_DISABLED=false environment variable.
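
For example, one way to pass the variable at startup (a sketch; it assumes docker-compose forwards AUTH_DISABLED into the service containers):

AUTH_DISABLED=false docker-compose up {db,dpc-aggregation,dpc-attribution,dpc-api}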

Manual JAR execution

Alternatively, the individual services can be run manually by executing the server command for each service.

Note: When manually running the individual services, you'll need to ensure that there are no listening port collisions. By default, each service starts with the same application (8080) and admin (9900) ports. We provide a sample application.local.conf file which contains all the necessary configuration options. This file can be copied and used directly: cp application.local.conf.sample application.local.conf.

Note: The API service requires authentication before performing actions, which will cause most integration tests to fail, as they expect the endpoints to be open. Authentication can be disabled in one of two ways: set the ENV environment variable to local (the default when running under Docker), or set dpc.api.authenticationDisabled=true in the config file (the default from the sample config file).
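
For example, a sketch of the environment-variable route when starting a service manually:

ENV=local java -jar dpc-api/target/dpc-api.jar server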

Next, start each service in a new terminal window from within the dpc-app root directory.

java -jar dpc-attribution/target/dpc-attribution.jar server
java -jar dpc-aggregation/target/dpc-aggregation.jar server
java -jar dpc-api/target/dpc-api.jar server

By default, the services will attempt to load the application.local.conf file from the current execution directory. This can be overridden in two ways.

  1. Passing ENV={dev,test,prod} will load a {dev,test,prod}.application.conf file from the service resources directory.
  2. Manually specifying a configuration file after the server command (e.g., server src/main/resources/application.conf) will load that configuration directly.

Note: Manually specifying a config file will disable the normal configuration merging process. This means that only the config variables directly specified in the file will be loaded; no other application.conf or reference.conf files will be processed.
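
For example, starting the API service directly from a specific configuration file (following the pattern above; only that file's settings will apply):

java -jar dpc-api/target/dpc-api.jar server src/main/resources/application.conf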

You can check that the application is running by requesting the FHIR CapabilityStatement for the dpc-api service, which will return a JSON-formatted FHIR resource.
    curl -H "Accept: application/fhir+json" http://localhost:3002/v1/metadata

Seeding the database

Note: This step is not required when directly running the demo for the dpc-api service, which partially seeds the database on first execution.

By default, DPC starts with an empty attribution database, which means that no patients have been attributed to any providers and thus nothing can be exported from BlueButton.

In order to successfully test and demonstrate the application, initial data needs to be loaded into the attribution database. We provide a small CSV file which associates some fake providers with valid patients from the BlueButton sandbox. The database can be automatically migrated and seeded by running the following commands before starting the Attribution service.

java -jar dpc-attribution/target/dpc-attribution.jar db migrate
java -jar dpc-attribution/target/dpc-attribution.jar seed

Testing the Application

Demo client

The dpc-api component contains a demo command, which illustrates the basic workflow for submitting an export request and modifying an attribution roster. It can be executed with the following command:

java -jar dpc-api/target/dpc-api.jar demo

Note: The demo client expects the entire system (all databases and services) to be running from a new state (no data in the database). This is the default when starting the services from the docker-compose file. When running the JARs manually, the user will need to ensure that the dpc_attribution database is truncated after each run.
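
One possible reset sequence between runs, assuming dpc-attribution exposes the standard Dropwizard migrations commands alongside db migrate (db drop-all deletes every table, so use it only against a local development database):

java -jar dpc-attribution/target/dpc-attribution.jar db drop-all --confirm-delete-everything
java -jar dpc-attribution/target/dpc-attribution.jar db migrate
java -jar dpc-attribution/target/dpc-attribution.jar seed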

The demo performs the following actions:

  1. Makes an export request for a given provider. This request fails because the provider is not registered with the application and has no attributed patients.
  2. Generates an attribution roster for the provider using the test_association.csv file.
  3. Resubmits the original export request.
  4. Polls the Job endpoint using the URL returned from the export request and waits for a completed status.
  5. Outputs the download URLs for all files generated by the export request.

Manual testing

The recommended method for testing the services is with the Postman application, which allows easy visualization of responses and simplifies adding the necessary HTTP headers.

Steps for testing the data export:

  1. Start the services using either the docker-compose command or through manually running the JARs.

    If running the JARs manually, you will need to migrate and seed the database before continuing.

  2. Make an initial GET request to the following endpoint: http://localhost:3002/v1/Group/3461C774-B48F-11E8-96F8-529269fb1459/$export. This will request a data export for all the patients attributed to provider 3461C774-B48F-11E8-96F8-529269fb1459. You will need to set the ACCEPT header to application/fhir+json (per the FHIR bulk spec), as shown in the curl sketch below.
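
    For example, with curl (-i prints the response headers so you can see Content-Location; the single quotes keep the shell from expanding $export):

    curl -i -H 'Accept: application/fhir+json' 'http://localhost:3002/v1/Group/3461C774-B48F-11E8-96F8-529269fb1459/$export'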

  3. The response from the export endpoint should be a 204 with the Content-Location header containing a URL which the user can use to check the status of the job.

  4. Make a GET request using the URL provided by the /Group endpoint from the previous step, which has this format: http://localhost:3002/v1/Jobs/{unique UUID of export job}. You will need to ensure that the ACCEPT header is set to application/fhir+json (per the FHIR bulk spec) and that the PREFER header is set to respond-async. The server should return a 204 response until the job has completed. Once the job is complete, the endpoint should return data in the following format (the actual values will be different):

    {
       "transactionTime": 1550868647.776162,
       "request": "http://localhost:3002/v1/Job/de00da66-86cf-4be1-a2a8-0415b21a6a9b",
       "requiresAccessToken": false,
       "output": [
           "http://localhost:3002/v1/Data/de00da66-86cf-4be1-a2a8-0415b21a6a9b.ndjson"
       ],
       "error": []
    }

    The output array contains a list of URLs where the exported files can be downloaded from.
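
    A hypothetical polling request with curl, using the headers described above and the job UUID from the example response:

    curl -H 'Accept: application/fhir+json' -H 'Prefer: respond-async' http://localhost:3002/v1/Jobs/de00da66-86cf-4be1-a2a8-0415b21a6a9b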

  5. Download the exported files by calling the /Data endpoint with the URLs provided, e.g. http://localhost:3002/v1/Data/de00da66-86cf-4be1-a2a8-0415b21a6a9b.ndjson.
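
    For example (the sample response above has requiresAccessToken set to false, so no credentials are needed; -o writes the NDJSON to a local file):

    curl http://localhost:3002/v1/Data/de00da66-86cf-4be1-a2a8-0415b21a6a9b.ndjson -o export.ndjson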

  6. Enjoy your glorious ND-JSON formatted FHIR data.

Smoke tests

Smoke tests are provided by Taurus and JMeter. The tests can be run via the environment-specific Makefile commands; e.g., make smoke/local runs the smoke tests against the locally running Docker instances.

In order to run the tests, you'll need to ensure that virtualenv is installed.

pip3 install virtualenv

Building the Additional Services

Documentation on building the DPC website is covered in the dpc-web README.

Building the FHIR implementation guide is detailed in the ig directory.
