Metriport helps healthcare organizations access comprehensive patient medical data through an
open-source universal API.
Learn more »

Docs · NPM · Developer Dashboard · Website

Support us on Product Hunt and Launch YC
Metriport is SOC 2 and HIPAA compliant. Click here to learn more about our security practices.
Our Medical API brings you data from the largest clinical data networks in the country - one open-source API, 300+ million patients.
Metriport ensures clinical accuracy and completeness of medical information, with HL7 FHIR, C-CDA, and PDF formats supported. Through standardizing, de-duplicating, consolidating, and hydrating data with medical code crosswalking, Metriport delivers rich and comprehensive patient data at the point-of-care.
Our Medical Dashboard enables providers to streamline their patient record retrieval process. Get up and running within minutes, accessing the largest health information networks in the country through a user-friendly interface.
Tools like our FHIR explorer and PDF converter help you make sense of the data you need to make relevant care decisions and improve patient outcomes.
A key piece to achieving true interoperability is compatibility between different data formats. Using advanced processing techniques, Metriport's FHIR Converter takes common healthcare data formats such as C-CDA, and converts them into FHIR R4 to streamline data exchange.
Get started converting using our Quickstart Guide.
Check out the links below to get started with Metriport in minutes!
Backend for the Metriport API.
- Dir: /packages/api
- URL: https://api.metriport.com/
- Sandbox URL: https://api.sandbox.metriport.com/
Engine to convert various healthcare data formats to FHIR, and back.
We use AWS CDK as IaC.
- Dir: /packages/infra
Our beautiful developer documentation, powered by mintlify ❤️.
- Dir: /docs
- URL: https://docs.metriport.com/
Our npm packages are available in /packages:
- Metriport API: contains the Metriport data models, and a convenient API client wrapper.
- CommonWell JWT Maker: CLI to create a JWT for use in CommonWell queries.
- CommonWell SDK: SDK to simplify CommonWell API integration.
Got ideas for how you can make Metriport better? We welcome community contributions!
By making a contribution to this project, you are deemed to have accepted the Developer Certificate of Origin (DCO), agree to GitHub's Community Guidelines, and agree to the Acceptable Use Policies.
Click here to open a new issue - follow the chosen template and you're good to go.
This monorepo uses npm workspaces to manage the packages and execute commands globally.
But not all folders under /packages are part of the workspace. To see the ones that are, check the root folder's package.json under the workspaces section.
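For illustration, the workspaces section of the root package.json looks roughly like this (the exact list of folders may differ; check the actual file):

```json
{
  "workspaces": [
    "packages/api",
    "packages/infra",
    "packages/utils"
  ]
}
```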
To setup this repository for local development, issue this command on the root folder:
$ npm run init # only needs to be run once
$ npm run build # packages depend on each other, so it's best to build/compile them all
Useful commands:
- npm run test: executes the test script on all workspaces;
- npm run typecheck: runs typecheck on all workspaces, which checks for TypeScript compilation/syntax issues;
- npm run lint-fix: runs lint-fix on all workspaces, which checks for linting issues and automatically fixes the issues it can;
- npm run prettier-fix: runs prettier-fix on all workspaces, which checks for formatting issues and automatically fixes the issues it can.
This repo uses Semantic Versioning, and we automate the versioning by using Conventional Commits.
This means all commit messages must be created following a certain standard:
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
To enforce that commits follow this pattern, we have a Git hook (using Husky) that verifies commit messages against the Conventional Commits standard - it uses commitlint under the hood (config).
Accepted types:
- build
- chore
- ci
- docs
- feat
- fix
- perf
- refactor
- revert
- style
- test
Scope is optional, and we can use one of these, or empty (no scope):
- api
- infra
The footer should have the ticket number supporting the commit:
...
Ref: #<ticket-number>
You can enter the commit message manually and have commitlint check its content, or use Commitizen's CLI to guide you through building the commit message:
$ npm run commit
In case something goes wrong after you prepare the commit message and you want to retry it after fixing the issue, you can issue this command:
$ npm run commit -- --retry
Commitizen will retry the last commit message you prepared previously. More about this here.
To avoid pushing secrets to the remote git repository we use Gitleaks - triggered by Husky.
From their repository:
Gitleaks is a SAST tool for detecting and preventing hardcoded secrets like passwords, api keys, and tokens in git repos.
It automatically scans new commits and interrupts execution if it finds content that matches the configured rules.
Example of report while trying to commit changes:
> metriport@1.0.0 check-secrets
> docker run --rm -v $(pwd):/path zricethezav/gitleaks:v8.17.0 protect --source='/path' --staged --no-banner -v
Finding: ...XXXXXXXXXAIXXXXXXXXXXXXXXX/aXXXXXXX...
Secret: XXXXXXXXXXXXXX
RuleID: aws-access-token
Entropy: 1.021928
File: packages/core/src/external/cda/__tests__/examples.ts
Line: 69
Fingerprint: packages/core/src/external/cda/__tests__/examples.ts:aws-access-token:69
2:31AM INF 1 commits scanned.
2:31AM INF scan completed in 141ms
2:31AM WRN leaks found: 1
husky - pre-commit hook exited with code 1 (error)
If you're absolutely sure there's no secret on the reported file/line, add the fingerprint to the .gitleaksignore file - it will then be ignored and you'll be able to commit.
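For example, using the fingerprint from the report above, the .gitleaksignore entry would be a single line:

```
packages/core/src/external/cda/__tests__/examples.ts:aws-access-token:69
```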
First, create a local environment file to define your developer keys, and local dev URLs:
$ touch packages/api/.env
$ echo "LOCAL_ACCOUNT_CXID=<YOUR-TESTING-ACCOUNT-ID>" >> packages/api/.env
$ echo "API_URL=http://localhost:8080" >> packages/api/.env
$ echo "FHIR_SERVER_URL=<FHIR-SERVER-URL>" >> packages/api/.env # optional
Additionally, define your System Root OID. This will be the base identifier to represent your system in any medical data you create - such as organizations, facilities, and patients.
Your OID must be registered and assigned by HL7. You can do this here.
By default, OIDs in Metriport are managed according to the recommended standards outlined by HL7.
$ echo "SYSTEM_ROOT_OID=<YOUR-OID>" >> packages/api/.env
These envs are specific to CommonWell and are necessary in sending requests to their platform.
$ echo "CW_TECHNICAL_CONTACT_NAME=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_TECHNICAL_CONTACT_TITLE=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_TECHNICAL_CONTACT_EMAIL=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_TECHNICAL_CONTACT_PHONE=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_GATEWAY_ENDPOINT=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_GATEWAY_AUTHORIZATION_SERVER_ENDPOINT=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_GATEWAY_AUTHORIZATION_CLIENT_ID=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_GATEWAY_AUTHORIZATION_CLIENT_SECRET=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_MEMBER_NAME=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_MEMBER_OID=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_ORG_MANAGEMENT_PRIVATE_KEY=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_ORG_MANAGEMENT_CERTIFICATE=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_MEMBER_PRIVATE_KEY=<YOUR-SECRET>" >> packages/api/.env
$ echo "CW_MEMBER_CERTIFICATE=<YOUR-SECRET>" >> packages/api/.env
The API server reports analytics to PostHog. This is optional. If you want to set it up, add this to the .env file:
$ echo "POST_HOG_API_KEY_SECRET=<YOUR-API-KEY>" >> packages/api/.env
The API server reports endpoint usage to an external service. This is optional. It requires a reachable service that accepts a POST request to the configured URL, with the payload below:
{
"cxId": "<the account ID>",
"cxUserId": "<the ID of the user whose data is being requested>"
}
If you want to set it up, add this to the .env file:
$ echo "USAGE_URL=<YOUR-URL>" >> packages/api/.env
Then, to run the full back-end stack, use docker-compose to launch a Postgres container, a local instance of DynamoDB, and the Node server itself:
$ cd packages/api
$ npm run start-docker-compose
...or, from the root folder...
$ npm run start-docker-compose -w api
Now, the backend services will be available at:
- API Server: 0.0.0.0:8080
- Postgres: localhost:5432
- DynamoDB: localhost:8000
Another option is to have the dependency services running with docker compose and the back-end API running as a regular NodeJS process (faster to run and restart); this has the benefit of Docker Desktop managing the services, and you likely only need to start the dependencies once.
$ cd packages/api
$ npm run start-dependencies # likely only needs to be run once
$ npm run dev
The API Server uses Sequelize as an ORM, and its migration component to update the DB with changes as the application evolves. It also uses Umzug for programmatic migration execution and typing.
When the application runs, it automatically executes all migrations located under src/sequelize/migrations (in ascending order) before the code is actually executed.
If you need to undo/revert a migration manually, you can use the CLI, which is a wrapper to Umzug's CLI (still under heavy development at the time of this writing).
It requires DB credentials in the environment variable DB_CREDS (values from docker-compose.dev.yml, update as needed):
$ export DB_CREDS='{"username":"admin","password":"admin","dbname":"db","engine":"postgres","host":"localhost","port":5432}'
Run the CLI with:
$ npm i -g ts-node # only needs to be run once
$ cd packages/api
$ ts-node src/sequelize/cli
Alternatively, you can use a shortcut for migrations on local environment:
$ npm run db-local -- <cmd>
Note: the double dash -- is required so parameters after it go to the sequelize CLI; without it, parameters go to npm.
Umzug's CLI is still in development at the time of this writing; here's how to use it:
- it prints the commands being sent to the DB, followed by the result of each command
- it won't exit by default; you need to hit ctrl+c
- the command up executes all outstanding migrations
- the command down reverts one migration at a time
To create new migrations:
- Duplicate a migration file in ./packages/api/src/sequelize/migrations
- Rename the new file so the timestamp is close to the current time - it must be unique, as migrations are executed in sorted order
- Edit the migration file to perform the changes you want:
  - up adds changes to the DB (takes it to the new version)
  - down rolls back changes from the DB (goes back to the previous version)
To do basic UI admin operations on the DynamoDB instance, you can do the following:
$ npm install -g dynamodb-admin # only needs to be run once
$ npm run ddb-admin # admin console will be available at http://localhost:8001/
To kill and clean up the back-end, hit CTRL + C a few times, and run the following from the packages/api directory:
$ docker-compose -f docker-compose.dev.yml down
To debug the backend, you can attach a debugger to the running Docker container by launching the Docker: Attach to Node configuration in VS Code. Note that this supports hot reloads 🔥🔥!
The ./packages/utils folder contains utilities that help with the development of this and other open-source Metriport projects:
- mock-webhook: implements the Metriport webhook protocol; applications integrating with the Metriport API can use it as a reference for the behavior expected from them when using the webhook feature.
- fhir-uploader: useful to insert synthetic/mock data from Synthea into FHIR servers (see https://github.com/metriport/hapi-fhir-jpaserver).
Check the scripts on the folder's package.json to see how to run these.
Unit tests can be executed with:
$ npm run test
To run integration tests, make sure to check each package/folder README for requirements, but in general they can be executed with:
$ npm run test:e2e
Most endpoints require an API Gateway API Key. You can create one manually on the AWS console, or programmatically through the AWS CLI or SDK.
To do it manually:
- Login to the AWS console;
- Go to API Gateway;
- Create a Usage Plan if you don't already have one;
- Create an API Key;
  - the value field must follow this pattern: base64 of "<KEY>:<UUID>", where:
    - KEY is a random key (e.g., generated with nanoid); and
    - UUID is the customer ID (more about this on Initialization)
- Add the newly created API Key to a Usage Plan.
Now you can make requests to endpoints that require an API Key by setting the x-api-key header.
- Install the AWS CLI and authenticate with it.
- Create and configure a deployment config file: /infra/config/production.ts. You can see example.ts in the same directory for a sample of what the end result should look like. Optionally, you can set up config files for staging and sandbox deployments, based on your environment needs. Then, proceed with the deployment steps below.
- Configure the Connect Widget environment variables to the subdomain and domain you'll be hosting the API at, in the config file: packages/connect-widget/.env.production.
- First, deploy the secrets stack. This will set up the secret keys required to run the server using AWS Secrets Manager, and create other infra pre-requisites. To deploy it, run the following command (with <config.secretsStackName> replaced with what you've set in your config file):
$ ./packages/scripts/deploy-infra.sh -e "production" -s "<config.secretsStackName>"
- After the previous steps are done, define all of the required keys in the AWS console by navigating to Secrets Manager.
- Then, to provision the infrastructure needed by the API/back-end, execute the following command:
$ ./packages/scripts/deploy-infra.sh -e "production" -s "<config.stackName>"
This will create the infrastructure to run the API, including the ECR repository where the API will be deployed. Take note of it to populate the environment variable ECR_REPO_URI.
- To provision the IHE Gateway: update the packages/infra/config/production.ts configuration file, populating the properties under iheGateway with the information from the respective resources created in the previous step (API Stack). Then execute:
$ ./packages/scripts/deploy-infra.sh -e "production" -s "IHEStack"
This will create the infrastructure to run the IHE Gateway.
- To deploy the API on ECR and restart the ECS service to make use of it:
$ AWS_REGION=xxx ECR_REPO_URI=xxx ECS_CLUSTER=xxx ECS_SERVICE=xxx ./packages/scripts/deploy-api.sh
where:
- ECR_REPO_URI: The URI of the ECR repository to push the Docker image to (created on the previous step)
- AWS_REGION: The AWS region where the API should be deployed at
- ECS_CLUSTER: The ARN of the ECS cluster containing the service to be restarted upon deployment
- ECS_SERVICE: The ARN of the ECS service to be restarted upon deployment
After deployment, the API will be available at the configured subdomain + domain.
Note: if you need help with the deploy-infra.sh script at any time, you can run:
$ ./packages/scripts/deploy-infra.sh -h
Distributed under the AGPLv3 License. See LICENSE for more information.
Copyright © Metriport 2022-present