The concept library is a system for storing, managing, sharing, and documenting clinical code lists in health research.
The specific goals of this work are:
- Store code lists along with metadata that captures important information about quality, author, etc.
- Store version history and provide a way to unambiguously reference a particular version of a code list.
- Allow programmatic interaction with code lists via an API, so that they can be directly used in queries, statistical scripts, etc.
- Provide a mechanism for sharing code lists between projects and organizations.
You can learn more about us here:
- Our live website is available here
- Our documentation is available here
Our goal is to create a system that describes research study designs in a machine-readable format to facilitate rapid study development; higher quality research; easier replication; and sharing of methods between researchers, institutions, and countries.
A significant aspect of research using routinely collected health records is defining how concepts of interest (including conditions, treatments, symptoms, etc.) will be measured. This typically involves identifying sets of clinical codes that map to a variable that the researcher wants to measure, and sometimes a set of rules as well (e.g. a sufferer from a disease may be defined as someone who has a diagnosis code from list A and a medication from list B, but excluding anyone who has a code from list C). A large part of the analysis work may involve consulting clinicians, investigating the data, and creating and testing definitions of clinical concepts to be used.
Often the definitions that are created are of interest to researchers for many studies, but there are barriers to easily sharing them. The definitions may be embedded within study-specific scripts, such that it is not easy to extract the part that may be of general interest. Also, often researchers do not fully document how a concept was created, its precise meaning, limitations, etc. Crucial information may be lost when passing it to other researchers, resulting in mistakes. Often there simply is no mechanism to discover and share work that has been done previously, leading researchers to waste time and resources reinventing the wheel. In theory, when research is published, information on the precise methods used should be included, but in reality this is often inadequate.
1. Clone this repository
2. Setup with Docker
2.1. Prerequisites
2.1.1. Docker
2.1.2. Running on Apple Silicon
2.2. Database Setup
2.2.1. Restore from Local Backup
2.2.2. Restore from Git Repository
2.2.3. Migration only
2.3. Development
2.3.1. Docker Compose Files
2.3.2. Initial Build
2.3.3. Stopping and Starting the Containers
2.3.4. Live Working
2.3.5. Removing the Containers
2.3.6. Local Pre-production Builds
2.3.7. Impact of Environment Variables
2.4. Accessing and Exporting the Database
2.4.1. Access/Export with PGAdmin4
2.4.2. Access/Export with CLI
2.5. Debugging and Running Tests
2.5.1. Django Logging
2.5.2. Debug Tools in Visual Studio Code
2.5.3. Running Tests
2.6. Setting up VSCode Tasks
2.6.1. Basics
2.6.2. Debug Build Tasks
2.6.3. Test Build Tasks
2.6.4. How to Handle Cleaning
2.7. Creating a Superuser
3. Setup without Docker
3.1. Prerequisites
3.2. Installing
3.2.1. Cloning the Concept Library
3.2.2. Install virtualenv and virtualenvwrapper
3.2.3. Database Setup
3.2.4. Installing LDAP functionality
3.2.5. Administration area
3.3. Using Eclipse
3.4. Running Tests
4. Deployment
4.1. Deploy Scripts
4.1.1. Manual Deployment
4.1.2. Automated Deployment
4.2. Harbor-driven CI/CD Pipeline
5. API and Packages
5.1. Clients
5.1.1. What are Clients?
5.1.2. Available Packages
5.2. API
To download this repository:
- Ensure that you have installed Git (e.g. Git for Windows).
- Open a terminal
- Navigate to the folder you want to clone this repository into
- Run the command: `git clone https://github.com/SwanseaUniversityMedical/concept-library.git`
Please ensure that you have installed Docker Desktop v4.10.1 or Docker Engine v20.10.17.
If you encounter any issues, please see Docker's documentation (https://docs.docker.com/).
The app container requires emulation on ARM CPUs, so please install Rosetta 2:
- Open a terminal
- Run: `softwareupdate --install-rosetta`
[!] Note: Do not share the backup files with anyone
To restore from a local backup:
- Navigate to the `concept-library/docker/development` folder
- Place a `.backup` file inside of the `db` folder
- Skip to 2.3. Development
[!] Note: Do not share these files with anyone
[!] Note: The initial run of the application may take a while if you are using this method; however, subsequent builds will be faster as the backup is saved locally in the `concept-library/docker/development/db/` folder
To restore from a Git repository:
- Create a personal access token on GitHub (https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token), ensuring it grants access to private repositories
- Navigate to the `concept-library/docker/development/` folder
- Duplicate the `example.git.token` file inside of `development/db/`
- Rename the duplicated file to `git.token`
- Delete the contents of the file and paste in your personal access token
- Open the `postgres.compose.env` file inside of the `docker/development/env` folder
- Ensure that the environment variable `POSTGRES_RESTORE_REPO` is set to the correct GitHub repository where your `.backup` file is stored
- Skip to 2.3. Development
If you do not have a backup available, the application will still run successfully, as migrations are applied automatically; however, no data will be restored. Please skip to 2.3. Development.
With an empty database, you will need to run statistics manually for the application to work correctly:
- After following the steps to start the application in 2.3. Development
- Navigate to `127.0.0.1/admin/run-stats`
Within the `concept-library/docker/` directory you will find the following docker-compose files:
`docker-compose.dev.yaml`
- This is the development docker container used to iterate on the Concept Library.
- After building, the application can be located at http://127.0.0.1:8000
`docker-compose.test.yaml`
- This compose file builds an environment that better reflects the production environment, serving the application via Apache, and includes adjunct services like Redis, Celery and Mailhog.
- It is recommended for use when developing the Docker images, or as a pre-production test when modifying build behaviour such as offline compression.
- After building, the application can be located at http://localhost:8005
`docker-compose.prod.yaml`
- This compose file builds the production container.
- It is used for both manual and automated deployment via CI/CD workflows.
- After building, the application can be located at https://conceptlibrary.some-demo-app.saildatabank.com, where `some-demo-app` describes the development sub-domain.
To perform the initial build and run of the application:
- Open a terminal
- Navigate to the `concept-library/docker/` folder
- In the terminal, run `docker-compose -p cll -f docker-compose.dev.yaml up --build` (append `-d` as an argument to run in the background)
The application and database will be available at:
- Application: `127.0.0.1:8000`
- Database: `127.0.0.1:5432`
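Once the containers are up, you can optionally sanity-check that both services are reachable; a quick sketch:

```shell
# Illustrative checks: the app should answer over HTTP,
# and both the app and postgres containers should be listed
curl -I http://127.0.0.1:8000
docker ps --filter name=cll
```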
To stop the docker containers:
- If you have a terminal open which is running the docker containers, press `CTRL + C` or `CTRL + Z` to stop the containers
- If you do not have a terminal open which is running the containers:
  a. Open a terminal
  b. Navigate to the `concept-library/docker/` folder
  c. In the terminal, run `docker-compose -p cll -f docker-compose.dev.yaml down`
To start the docker containers (if they have already been built and have stopped for any reason):
- Open a terminal
- Navigate to the `concept-library/docker/` folder
- In the terminal, run `docker-compose -p cll -f docker-compose.dev.yaml start`
Whilst working on the codebase, any changes you save should be automatically applied to the code stored in the app container.
If you make any changes to the models you will need to either:
- Stop and start the containers again with `docker-compose -p cll -f docker-compose.dev.yaml up --build`; the migrations will be applied automatically
- OR; execute the migration code from within the app container, as sketched below (see: https://docs.docker.com/engine/reference/commandline/exec/)
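For the second option, a minimal sketch of applying migrations from inside the running app container (the container name below is the usual default; check `docker ps` if yours differs):

```shell
# Open a shell in the app container, then run Django's migration commands
docker exec -it cll-app-1 /bin/bash
cd /var/www/CodeListLibrary_project
python manage.py makemigrations
python manage.py migrate
```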
To remove the containers:
- Open a terminal
- Navigate to the `concept-library/docker/` folder
- In the terminal, run:
  a. `docker compose down`: removes networks and containers.
  b. OR; `docker-compose -p cll -f docker-compose.dev.yaml down --rmi all -v`: removes networks, containers, images and volumes.
  c. OR; to prune your docker, enter `docker system prune -a`
[!] Note: To test the transpiling, minification or compression steps, OR if you have made changes to the Docker container or its images, it is recommended that you run a local, pre-production build
The test docker-compose file has several profiles that can be used to set up your environment:
- `live` - this starts both the celery and mailhog services
- `email` - this starts the mailhog service only
[!] Note: If you do not want to start the celery services you can remove the `--profile live` argument
To create a local, pre-production build:
- Open a terminal
- Follow the steps above if you have not already built the images
- Navigate to the `concept-library/docker/` folder
- Set up the environment variables within `./test/app.compose.env`
- In the terminal, run `docker build -f test/app.Dockerfile -t cll/app --build-arg server_name=localhost ..`
- Once the image is built, run `docker tag cll/app cll/celery_beat; docker tag cll/app cll/celery_worker`
- Finally, run `docker-compose -p cll -f docker-compose.test.yaml --profile live up` (append `-d` as an argument to run in the background)
- Open a browser and navigate to `localhost:8005` to access the application
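For convenience, the steps above can be chained into a single shell session; a sketch, assuming it is run from the `concept-library/docker/` folder:

```shell
# Build the pre-production image, tag it for the celery services,
# then bring the test stack up using the `live` profile
docker build -f test/app.Dockerfile -t cll/app --build-arg server_name=localhost ..
docker tag cll/app cll/celery_beat
docker tag cll/app cll/celery_worker
docker-compose -p cll -f docker-compose.test.yaml --profile live up
```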
[!] Note: To use the mailhog service, you will have to run `--profile live` or `--profile email`
If you would like to learn more about Mailhog, please visit this site. Otherwise, to start Mailhog:
- Start the container as described above
- Head to http://localhost:8025
- Any outgoing emails sent from the application will be visible here
[!] Note: To modify the environment variables, please navigate to `./docker/test/app.compose.env` (or the appropriate folder for the container you are building)
Some environment variables modify the behaviour of the application. The following are important to consider when modifying `app.compose.env`:
- `DEBUG` → When this flag is set to `True`:
  - The application will expect a Redis service to be running for use as the cache backend, otherwise it will use a DummyCache
  - The application will enable the compressor and precompilers, otherwise this will not take place (aside from HTML minification)
- `IS_DEVELOPMENT_PC` → When this flag is set to `False`:
  - The application will use both LDAP and User model authentication, otherwise only the latter will be used
  - The application will use a different logging backend - please see `settings.py` for more information
Some environment variables modify the behaviour of the container when building; you should be aware of this behaviour when building `docker-compose.prod.yaml` and `docker-compose.test.yaml` - this behaviour is mostly defined within `init-app.sh`.
The following are important to consider when modifying `app.compose.env`:
- `IS_DEVELOPMENT_PC` → When this flag is set to `True`:
  - The application and celery services will await the postgres service to initialise before continuing
- `CLL_READ_ONLY` → When this flag is set to `False`:
  - The application will not run the `makemigrations` and `migrate` commands on startup
- `DEBUG` → This flag determines static collection behaviour:
  - If set to `True` it will compile, transpile and compress static resources
  - If set to `False` it will only collect the static resources
To learn about the impact of the other environment variables, please open and examine `./cll/settings.py`.
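As an illustration, a fragment of `app.compose.env` for a local development run might look like the following (the values are examples only, not recommendations):

```shell
# Illustrative app.compose.env fragment; see ./cll/settings.py for the full set
DEBUG=True              # enables the compressor/precompilers; expects Redis as the cache backend
IS_DEVELOPMENT_PC=True  # awaits postgres on startup; uses User model authentication only
CLL_READ_ONLY=False     # see the notes above regarding migration behaviour on startup
```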
[!] Note: If you have made changes to the environment variables in the `docker-compose.dev.yaml` file you will need to match those changes when connecting through the CLI or PGAdmin4
Please ensure you have installed PGAdmin4 and then:
- Open PGAdmin4
- Right-click the `Servers` object in the browser and click `Register > Server...`
- In the `General` tab, enter a name for the server, e.g. `docker-concept-library`
- In the `Connection` tab, enter:
  - `Host`: 127.0.0.1
  - `Port`: 5432
  - `Username`: clluser
  - `Password`: password
- Click save; the connection should now be visible in the browser
- Ensure the Docker container is running
- Open PGAdmin4
- Connect to the `docker-concept-library` server
- Right-click the `concept_library` database and click `Backup...`
- In the filename input field, enter the directory and name to save the backup file as. Ensure you save the file with a `.backup` extension
- Click the `Backup` button
[!] Note: The query will fail to retrieve results if you forget the semicolon, `;`, at the end of the query
- Open a terminal
- In the terminal, run: `docker exec -it cll-postgres-1 /bin/bash`
- Query the database:
  a. Initiate an active session with `psql -U clluser concept_library` and then run queries directly, e.g. `SELECT * FROM clinicalcode_phenotypes LIMIT 1;`
  b. OR; run a query directly with `psql -U clluser -d concept_library -c 'SELECT * FROM clinicalcode_phenotypes LIMIT 1;'`
- Open a terminal
- In the terminal, run: `docker exec -it cll-postgres-1 /bin/bash`
- Replace `[filename]` with the desired file name and run: `pg_dump -U postgres -F c concept_library > [filename].backup`
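Since the dump above runs inside the container, you may want to copy the resulting file back to the host; a sketch, assuming the default container name and an illustrative file name:

```shell
# Run the dump non-interactively, then copy the backup out of the container
docker exec cll-postgres-1 pg_dump -U postgres -F c -f /tmp/cll.backup concept_library
docker cp cll-postgres-1:/tmp/cll.backup ./cll.backup
```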
Django logging is enabled by default; you can view the logs in the terminal used to start the docker containers.
To disable the verbose logging:
- In `docker-compose.dev.yaml`, set `tty: false` under the `app` service
- In `docker-compose.dev.yaml`, set `DEBUG: false` under the `environment` section of the `app` service
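If you started the containers in the background with `-d`, you can still follow the application's logs from another terminal; for example:

```shell
# Follow the dev app container's logs; the name may differ (check `docker ps`)
docker logs -f cll-app-1
```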
Before continuing, open the `docker-compose.dev.yaml` file and ensure the `DEBUG_TOOLS` variable in the `app` container definition is set to `true`.
Create a run configuration for the project:
- Create a new folder and name it `.vscode`
- Create a new file within that folder and name it `launch.json`
- Paste the JSON below into the new file and then save the file
```json
{
  "configurations": [
    {
      "name": "Debug Application",
      "type": "python",
      "request": "attach",
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}/CodeListLibrary_project",
          "remoteRoot": "/var/www/CodeListLibrary_project"
        }
      ],
      "port": 8000,
      "host": "127.0.0.1"
    }
  ]
}
```
Now you're ready to start debugging:
- Build the container with `docker-compose -p cll -f docker-compose.dev.yaml up --build` and ensure it is running
- Add a breakpoint to the file that you are debugging
- In Visual Studio Code, open the `Run and Debug` menu by clicking the icon on the left-hand side of the screen or using the hotkey `CTRL + SHIFT + D`
- At the top of the debug menu, select the `Debug Application` option
- Press the run button and start debugging
Variables, Watch and Callstack can all be viewed in the `Run and Debug` menu panel, and the console can be viewed in the `Debug Console` (hotkey: `CTRL + SHIFT + Y`) window.
[!] Todo: Needs documentation once we implement & finalise new test suite
[!] Note: You can learn more about using external tools and VSCode's Tasks system here
To start using tasks:
- Open your terminal
- Navigate to the root of the `concept-library` project
- Create a new `.vscode` directory within the project folder by running: `mkdir .vscode`
- Navigate into this directory by running: `cd .vscode`
- Create a new `tasks.json` file by running: `touch tasks.json`
After opening the `tasks.json` file, you should configure the contents so it looks like this:

```json
{
  "version": "2.0.0",
  "tasks": []
}
```
[!] Note: You can learn about the available options for tasks here
To set up your first debug task, configure your `tasks.json` file as follows:
```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Build Debug",
      "detail": "Builds the development container",
      "type": "shell",
      "command": "docker-compose -p cll -f docker-compose.dev.yaml up --build",
      "options": {
        "cwd": "${workspaceFolder}/docker/"
      },
      "group": {
        "kind": "build",
        "isDefault": true
      },
      "presentation": {
        "reveal": "always",
        "panel": "new",
        "focus": false
      }
    }
  ]
}
```
To set up a task for the `docker-compose.test.yaml` container, append the following to the `"tasks": []` property:
```json
{
  "label": "Build Test",
  "detail": "Builds the test container",
  "type": "shell",
  "command": "docker build -f test/app.Dockerfile -t cll/app --build-arg server_name=localhost ..; docker tag cll/app cll/celery_beat; docker tag cll/app cll/celery_worker; docker-compose -p cll -f docker-compose.test.yaml up",
  "options": {
    "cwd": "${workspaceFolder}/docker/"
  },
  "group": {
    "kind": "test"
  },
  "presentation": {
    "reveal": "always",
    "panel": "new",
    "focus": false
  }
}
```
[!] Note: There will be some differences between Windows and other operating systems. The example below is set up to use PowerShell logical operators. On a Linux-based OS, you would need to use the '&&' and '||' operators instead of '-and' and '-or'
If you set up both the debug and test builds you will note that the docker container isn't cleaned between different tasks. It is possible to set up your tasks such that the containers will be cleaned.
To set this up, you would need to append the following cleaning task to your `tasks` property:
```json
{
  "label": "Clean Containers",
  "detail": "Cleans all cll related containers",
  "type": "shell",
  "command": "(docker ps -q --filter 'name=cll') -and (docker rm $(docker stop $(docker ps -q -f 'name=cll' -f 'name=redis'))) -or (echo 'Nothing to clean')",
  "options": {
    "cwd": "${workspaceFolder}/docker/"
  },
  "group": {
    "kind": "build"
  },
  "presentation": {
    "reveal": "never",
    "panel": "shared"
  },
  "problemMatcher": []
},
```
Using Compound Tasks, you can modify your `Build Debug` and `Build Test` tasks to clean before starting by adding the `dependsOn` property. In the case of the `Build Debug` task, it would look like this:
```json
{
  "label": "Build Debug",
  "detail": "Builds the development container",
  "type": "shell",
  "command": "docker-compose -p cll -f docker-compose.dev.yaml up --build",
  "options": {
    "cwd": "${workspaceFolder}/docker/"
  },
  "group": {
    "kind": "build",
    "isDefault": true
  },
  "presentation": {
    "reveal": "always",
    "panel": "new",
    "focus": false
  },
  "dependsOn": ["Clean Containers"]
},
```
To create a superuser:
- Ensure the docker container is running and open a new terminal
- Run `docker exec -it cll-app-1 /bin/bash` (see below if this doesn't work)
- Navigate to the CodeListLibrary_project directory by running: `cd /var/www/CodeListLibrary_project`
- Run `python manage.py createsuperuser` and follow the instructions in the terminal to create the user
- Verify that the user was created properly by navigating to the website and logging in with the credentials entered
If you are unable to `exec` into `cll-app-1`:
- Run `docker ps -a` in the terminal
- Look for the Concept Library's `app` container and copy its `CONTAINER ID`
- Run the same command using the `CONTAINER ID`, e.g. `docker exec -it 82508ae4ef /bin/bash`
- Continue with Step (3) above
[!] Note: Unlike 2. Setup with Docker, this method of setting up and using the Concept Library is not recommended. Containerisation is a much more suitable method if you intend to develop or host the Concept Library application.
If you decide to continue, please note that we would not be able to offer advice outside of what is detailed below - please take this into consideration when deciding which method you would like to use.
Please ensure that you have the following installed:
To clone the repository:
- Open the terminal
- Navigate to an appropriate directory
- Run the following command: `git clone https://github.com/SwanseaUniversityMedical/concept-library.git`
- Checkout the branch you would like to work on, e.g. run the following to work on Master: `git checkout master`
This will provide a dedicated environment for each project you create. It is considered best practice and will save time when you’re ready to deploy your project.
- Open the terminal
- Run the following command: `pip install virtualenvwrapper-win`
- Now navigate to the directory where you have downloaded the project, e.g. `cd C:/Dev/concept-library`
- To create a virtualenv, run the following command: `mkvirtualenv cclproject`
- To work on this environment, run: `workon cclproject`
- To install the required packages, run the following command: `pip install -r docker/requirements/local.txt`
- To stop working on this environment, run: `deactivate`
[!] Note: The following applies if you are a Concept Library Developer:
To retrieve a database backup, follow some of the steps in 2.2.2. Restore from Git Repository and download the `db.backup` file - this can be used to restore the Postgres db during the following steps.
- Install Postgres and PGAdmin on your device.
- Within PGAdmin, do the following:
  - Create a role called `clluser`
  - Create a database called `code_list_library`
  - Create a read-only role
- When running the application it may complain that you have unapplied migrations; your app may not work properly until they are applied. To do this:
  - Navigate to `concept-library/CodeListLibrary_project/cll`
  - Run: `python manage.py makemigrations`
  - Finally, run: `python manage.py migrate`
- To run the application:
  - Navigate to `concept-library/CodeListLibrary_project/cll`
  - Run the following: `python manage.py runserver 0.0.0.0:8000`
- You can now access the server on http://127.0.0.1:8000/admin/
- To stop the server, press `CTRL + C` or `CTRL + Z` within the terminal
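If you are restoring the `db.backup` file mentioned in the note above, a minimal restore sketch might look like this (assuming the `clluser` role and `code_list_library` database created earlier; the flags may need adjusting for your setup):

```shell
# Restore the downloaded backup into the local database;
# --no-owner avoids errors if the backup's owner role differs from yours
pg_restore -U clluser -d code_list_library --no-owner path/to/db.backup
```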
For Windows machines:
- You will need to install the Microsoft Visual C++ Compiler for Python. This can be found here
- Download the `python_ldap` wheel, located here
- Once downloaded, activate your virtualenv and run the following: `pip install path/to/the/file/python_ldap.whl`
- Once installed, you can run the `pip install django-auth-ldap` command. See the LDAP installation reference here
- If you intend to use LDAP over SSL, please take a look at the troubleshooting guide found here
When you first start the application there will be no users within your database. You will first need to create a superuser account in order to access the administration site.
- Open the terminal and run the following: `python manage.py createsuperuser`
- Fill in the desired username, email and password
- When the development server is running you can access the admin section by going to the following url: http://127.0.0.1:8000/admin/
- Navigate to the `File` button within Eclipse's toolbar, then select `Open projects from file system`
- Browse to the Concept Library folder, e.g. `C:/Dev/concept-library`
- Assuming you have followed the previous steps to create a virtual env, you will need to point Eclipse's python interpreter to the virtual env:
  - Select the `Window` button within your toolbar and open `Preferences`
  - Select `PyDev -> Interpreters -> Python Interpreter` and select `New`
  - Follow the interpreter wizard (e.g. enter the name), then browse to the Python executable (as set in your system environment %PATH% variable)
  - Select each of the folders you want added to your python path
- Right-click the Concept Library project, select `Debug as...` and choose the python development interpreter
- You should now see that the server is live at http://127.0.0.1:8000/admin/
[!] Todo: Needs documentation once we implement & finalise new test suite
[!] Note: These instructions only pertain to feature branches which are not covered by the CI/CD workflow
This script can be used to manually deploy feature branches on the server. Please note that you will have to either (a) modify the script to use the appropriate directories and settings, or (b) pass arguments to the script to ensure it runs correctly.
Optional arguments for this script include:
| Command | Shorthand | Default value | Description |
|---------|-----------|---------------|-------------|
| `--file-path` | `-fp` | `/root/deploy_DEV_DEMO_DT` | Determines the root path of your environment variable text file (see below) |
| `--foreground` | `-fg` | `false` | Whether the containers will be built in the foreground |
| `--no-pull` | `-nd` | `true` | Whether to pull the branch from the Git repository |
| `--no-clean` | `-nc` | `true` | Whether to clean unused docker containers/images/networks/volumes/build caches |
| `--env` | `-e` | `env_vars.txt` | Name of the environment variables text file |
| `--file` | `-f` | `docker-compose.prod.yaml` | Name of the docker-compose file you would like to deploy |
| `--name` | `-n` | `cllro_dev` | Name of the docker container |
| `--repo` | `-r` | Repo | Github repository you would like to pull from |
| `--branch` | `-b` | `DFTM` | Repo's branch you would like to pull from |
| `--profile` | `-p` | `live` | Name of the docker profile to execute |
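For example, assuming the script parses the value-style arguments listed above, a manual deployment of a specific branch might look like:

```shell
# Illustrative invocation using values drawn from the table above
/root/deploy-feature.sh --branch DFTM --name cllro_dev --env env_vars.txt
```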
[!] Note: This file should be present within the `$RootPath` as described above (modified by passing `-fp [path]` to the deployment script)
This process should be automatic, assuming you have an `env-vars.txt` file in your server's directory. The name of this file usually includes a suffix describing the server's status, e.g. `-FA` for full-access servers or `-RO` for read-only servers.
During manual deployment, the file will be copied and renamed to `env_vars.txt` for use by `docker-compose.prod.yaml` within `./concept-library/CodeListLibrary_project/docker/` after the repository is cloned by the `deploy-feature.sh` script.
- SSH into the server
- Skip this step if you have already created the `deploy-feature.sh` script on this server:
  - Please clone the Github repository and copy/move it into a directory of your choosing (in this case, we will assume it's within /root/)
  - Ensure the `deploy-feature.sh` script has the appropriate permissions
- Within your terminal, run the following: `/root/deploy-feature.sh`
  - Apply any parameters you would like to add, e.g. `/root/deploy-feature.sh --repo Dynamic-Template-Feature-Master`
  - OR; simply edit the variables within the `deploy-feature.sh` script
- Await the successful build
- You should now be able to visit the site on the appropriate domain
[!] Todo: Needs documentation once we move from Gitlab CI/CD -> Harbor
[!] Note: The env_file has to (1) be in the same directory as the compose file and (2) be set within the docker-compose.prod.yaml file
[!] Note: `/root/` in this case describes the directory of your choosing
If not already present on the machine, please ensure that the following files are within the root directory:
- Copy `./docker/production/scripts/deploy-site.sh` to `/root/`
- Copy `./docker/docker-compose.prod.yaml` to `/root/`
If you wish, you can now edit `deploy-site.sh` to set up any variables that may differ from the other servers. Please see the table below for more information regarding variables and commands that can be passed to `deploy-site.sh`.
You need to ensure that there is an `env_vars.txt` file within the `/root/` directory where your `docker-compose.prod.yaml` is found.
Optional parameters for the `deploy-site.sh` script include:
| Command | Shorthand | Default value | Description |
|---------|-----------|---------------|-------------|
| `--file-path` | `-fp` | `/root/deploy_DEV_DEMO_DT` | Determines the root path of where the docker-compose.prod.yaml file lives |
| `--foreground` | `-fg` | `false` | Whether the containers will be built in the foreground |
| `--no-clean` | `-nc` | `true` | Whether to clean unused docker containers/images/networks/volumes/build caches |
| `--address` | `-a` | Harbor registry URL | Determines the registry we will try to pull the images from |
| `--file` | `-f` | `docker-compose.prod.yaml` | Name of the docker-compose file you would like to deploy |
| `--profile` | `-p` | `live` | Name of the docker profile to execute |
[!] Todo: Needs updating after moving to automated, Harbor-driven CI/CD pipeline
Images will be automatically built via Gitlab CI/CD from the `master` branch when a merge is committed. These images can be pulled using the `deploy-site.sh` script as described in 4.1.2. Automated Deployment.
When automated deployment is disabled, which may be the case for certain servers, you can still deploy the images being built by the CI/CD pipeline.
To do so manually, please do the following:
- Open the terminal and SSH into the server
- `cd` to the `/root/` directory of the server you are deploying (e.g. `/root/deploy_DEV_DEMO_DT`)
- Copy the `./docker/production/scripts/deploy-site.sh` and `./docker/docker-compose.prod.yaml` files to this directory (you can do this by pulling them from the Github repository)
- Ensure you have a `.txt` file named `env_vars.txt` within the same directory as these files
- Ensure you are logged in, e.g. `docker login {details}` - if you are SSHing into a live server, this step will have already been completed by our config(s)
- Run the following command: `./root/{directory}/deploy-site.sh --address {registry_address}`, where `{registry_address}` describes the address where the Gitlab images are uploaded (check out `.gitlab-ci.yml` for more information)
[!] Todo: Needs documentation once we move from Gitlab CI/CD -> Harbor and have set up automated deployment
We maintain client packages that can be used to interface with the Concept Library. These packages are intended to make it easier for you to get started with the Concept Library; they implement several features to reduce your technical burden, such as allowing you to submit Phenotypes using a human-readable YAML template.
Under the hood, these packages call our API endpoints - you can read more about these in 5.2. API. However, we anticipate that beginners may feel more comfortable using one of the following packages.
- Concept Library Client - an implementation of the API client for the Concept Library in R
- pyconceptlibraryclient - a Python API client for the Concept Library
If you would like to interface with the API without the aid of our client packages, we have documented our API using Swagger. The Swagger documentation is available here.
Please refer to our reference data, which can be found here, for fields that are described by their identifier.
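For instance, a raw HTTP request against the API might look like the sketch below; the host and endpoint path here are illustrative, so please confirm the actual routes against the Swagger documentation:

```shell
# Hypothetical example: request a JSON list of phenotypes from a Concept Library instance
curl -s -H "Accept: application/json" \
  "https://conceptlibrary.saildatabank.com/api/v1/phenotypes/"
```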