OpenLMIS Reference Distribution
This repository is the home of the OpenLMIS v3+ Reference Distribution.
The Reference Distribution utilizes Docker Compose to gather the published OpenLMIS Docker Images together and launch a running application. These official OpenLMIS images are updated frequently and published to our Docker Hub. These images cover all aspects of OpenLMIS: from server-side Services and infrastructure to the reference UI modules that a client's browser will consume.
The docker-compose files within this repository should be considered the authoritative OpenLMIS Reference Distribution, as well as a template for how OpenLMIS' services and UI modules should be put together in a deployed instance of OpenLMIS following our architecture.
Starting the Reference Distribution
- Docker Engine: 1.12+
- Docker Compose: 1.8+
- Docker Compose on Windows has not supported our development environment setup; you can use Docker for Windows to run the Reference Distribution, but not to develop against it
- if you're on a Virtual Machine, finding your correct IP may have some caveats, especially for development
- Copy and configure your settings: edit `BASE_URL` to be your IP address (if you're behind a NAT, don't mistakenly use the router's address). You should only need to do this once; however, since this is an actively developed application, you may need to check the environment file template for new additions.
$ cp settings-sample.env settings.env
Note that 'localhost' will not work here; it must be an actual IP address (like aaa.bbb.yyy.zzz). This is because localhost would be interpreted relative to each container, whereas your workstation's IP address gives an absolute outside location that is reachable from every container. Also note that your `BASE_URL` does not need the port ":8080" that may be in the environment file template.
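As an illustration, a hypothetical excerpt of settings.env might look like the following; the `BASE_URL` variable name comes from the template above, but the IP address is an example only:

```shell
# Hypothetical settings.env excerpt; substitute your workstation's real IP.
BASE_URL=http://192.168.1.50
```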
- Pull all the services, and bring the Reference Distribution up. Since this is actively developed, you should pull the services frequently.
$ docker-compose pull
$ docker-compose up -d  # drop the -d here to see console messages
When the application is up and running, you should be able to access the Reference Distribution at:
Note: if you get an HTTP 502: Bad Gateway, the microservice containers are probably still starting up. Wait a few minutes for everything to start. You can also run `docker stats` to watch each container's CPU and memory usage while it starts.
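If you'd rather script the wait than watch it, here is a minimal sketch; the `check_up` helper is hypothetical, and it assumes `curl` is installed and that the URL points at your instance:

```shell
# check_up is a hypothetical helper: it succeeds once the given URL answers
# with any HTTP status other than 502 (still starting). An empty or 000
# result means curl could not connect at all, which also counts as "not up".
check_up() {
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1" 2>/dev/null)
  [ "$code" != "502" ] && [ "$code" != "000" ] && [ -n "$code" ]
}

# Example usage: poll every 10 seconds until the gateway is up.
# until check_up "http://192.168.1.50"; do sleep 10; done
```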
By default the demo configuration (facilities, geographies, users, etc) is loaded on startup. To use that demo you may start with a demo account:
Username: administrator
Password: password
If you opted not to load the demo data and instead need a bare-bones account to configure your system, deactivate the demo data and use the bootstrap account:
Username: admin
Password: password
If you are configuring a production instance, be sure to secure these accounts as soon as possible, and refer to the Configuration Guide for more about the OpenLMIS setup process.
To stop the application and clean up:
- if you ran `docker-compose up -d`, stop the application with `docker-compose down -v`
- if you ran `docker-compose up` (note the absence of `-d`), interrupt the application with Ctrl-C, then perform cleanup by removing the containers. See our docker cheat sheet for help on manually removing containers.
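The manual container cleanup can be sketched as follows; the `openlmisrefdistro` name filter is an assumption based on this repository's default compose project name, and the docker call is guarded so the snippet is a no-op where the docker CLI isn't installed:

```shell
# Remove stopped containers whose names match the compose project.
# The filter value is an assumption; adjust it to your project name.
name_filter="name=openlmisrefdistro"
if command -v docker >/dev/null 2>&1; then
  docker ps -a -q --filter "$name_filter" | xargs -r docker rm
fi
```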
It's possible to load demo data using an environment variable, `spring.profiles.active`. When this variable includes `demo-data` as one of its values, the demo data for the service will be loaded. It may be set in the settings.env file or in your shell with:
$ export spring_profiles_active=demo-data
$ docker-compose up -d
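Equivalently, the profile can live in settings.env rather than be exported each time; this is an illustrative line, and the file may already contain the variable:

```shell
# Illustrative settings.env line enabling demo data.
spring_profiles_active=demo-data
```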
Performance data may also be optionally loaded and is defined by some Services. If you'd like to start a demo system with a lot of data, run this script instead of executing step #2 of the Quick Setup.
$ export spring_profiles_active=demo-data
$ ./demo-data-start.sh
Refresh Database (Profile)
This deployment profile is used by a few services to help ensure that the database they're working against is in a good state. This profile should be set when:
- Manual updates have been made to the database (INSERT, UPDATE, DELETE) through SQL or any tool other than the HTTP REST API that each service exposes.
- The Release Notes call for it to be run during an upgrade.
Using this profile means that extra checks and updates are performed, which uses extra resources such as memory and CPU. When it is set, Services will start more slowly, sometimes significantly so.
Usually this profile only needs to be set for a single start of the service(s). If no further upgrades or manual database changes are made, the profile may be removed before subsequent starts to shorten startup time.
Docker Compose configuration
The docker-compose.yml file may be customized to change:
- Versions of Services that should be deployed.
- Host ports that should be used for specific Services.
This may be configured in the included .env file or overridden by setting the same variable in the shell.
For example, to set the HTTP port to 8080 instead of the default 80:
export OL_HTTP_PORT=8080
./start-local.sh
A couple of conventions:
- The .env file has service versions. See the .env file for more.
- Port mappings have defaults in the docker-compose.yml:
- OL_HTTP_PORT - Host port on which the application will be made available.
- OL_FTP_PORT_20 - Host port that the included FTP server's port 20 is mapped to.
- OL_FTP_PORT_21 - Host port that the included FTP server's port 21 is mapped to.
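For illustration, a hypothetical .env excerpt overriding all three port mappings might look like this; the values are examples only, and the defaults live in docker-compose.yml:

```shell
# Hypothetical .env overrides (example values).
OL_HTTP_PORT=8080
OL_FTP_PORT_20=20
OL_FTP_PORT_21=21
```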
When a container needs configuration via a file (as opposed to, for example, an environment variable), there is a special Docker image that's built as part of this Reference Distribution from the Dockerfile in the `config/` directory. This image, which will also be deployed as a container, is only a vessel for providing a named volume from which each container may mount the `/config` directory in order to self-configure.
To add configuration:
- Create a new directory under `config/`. Use a unique and clear name, e.g. `kannel`.
- Add the configuration files in this directory.
- Add a COPY statement to `config/Dockerfile` which copies the configuration file into the container's `/config` directory, e.g. `COPY kannel/kannel.config /config/kannel/kannel.config`.
- Ensure that the container which will use this configuration file mounts the named volume:
  kannel:
    image: ...
    volumes:
      - 'service-config:/config'
- Ensure the container uses/copies the configuration file from `/config`.
- When you add new configuration, or change it, ensure you bring this Reference Distribution up with `docker-compose up --build`.
The logging configuration utilizes this method.
NOTE: the configuration container that's built here doesn't run. It is normal for its Status to be Exited.
Logging configuration is "passed" to each service as a file (logback.config) through a named docker volume, `service-config`. To change the logging configuration:
- bring the application up with `docker-compose up --build`. The `--build` option will re-build the configuration image.
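As an illustration only (the real logback.config in `config/` may be structured quite differently, and the appender name here is invented), a minimal logback change might look like:

```xml
<!-- Hypothetical logback sketch: raise the root log level to DEBUG.
     The SYSLOG appender name is an assumption for illustration. -->
<configuration>
  <root level="DEBUG">
    <appender-ref ref="SYSLOG"/>
  </root>
</configuration>
```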
Most logging is collected by way of rsyslog (in the `log` container), which writes to a named volume.
However, not every docker container logs via rsyslog to this named volume; those services log either via docker logging or to a file, for which a named-volume approach works well.
The `log` container runs rsyslog, to which Services running in their own containers may forward their logging messages. This helps centralize all the various Service logging in one location. The container writes all of these messages to the file `/var/log/messages` of the named volume.
To read this file, you may mount this filesystem via:
$ docker run -it --rm -v openlmisrefdistro_syslog:/var/log openlmis/dev:3 bash
> tail /var/log/messages
Log format for Services
The default log format for the Services is below:
<timestamp> <container ID> <thread ID> <log level> <logger / Java class> <log message>
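For illustration, a hypothetical message in this format might look like the following; every field below is invented to match the pattern, not taken from a real log:

```
2017-05-08T14:21:03Z 3f2a9c1b [qtp-12] INFO org.example.SomeService - Request completed
```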
The format from the thread ID onwards can be changed in the logging configuration (logback.config) described above.
The `nginx` container runs the nginx and consul-template processes. These two log to named volumes. For example, to see Nginx's access log:
$ docker run -it --rm -v openlmisrefdistro_nginx-log:/var/log/nginx/log openlmis/dev:3 bash
> tail /var/log/nginx/log/access.log
With Nginx it's also possible to use Docker's logging, so that both logs are accessible via `docker logs <nginx>`. This is owed to the configuration of the official Nginx image. To use this configuration, change the appropriate environment variable.
If using the postgres container, the logging is accessible via `docker logs openlmisrefdistro_db_1`.
Cleaning the Database
Sometimes it's useful to drop the database completely; for this, a script is included that does just that.
Note: this should never be used in production, nor should it ever be deployed.
To run this script, you'll first need the name of the Docker network that the database is using. If you're using this repository, it's usually `openlmisrefdistro_default`. With this, run:
docker run -it --rm --env-file=.env --network=openlmisrefdistro_default -v $(pwd)/cleanDb.sh:/cleanDb.sh openlmis/dev:3 /cleanDb.sh
Replace `openlmisrefdistro_default` with the proper network name if yours differs.
Note that using this script against a remote Docker host is possible, though not advised.
When deploying the Reference Distribution as a production instance, remember to set the following environment variable so the production database isn't wiped on startup:
export spring_profiles_active="production"
docker-compose up --build -d
Documentation is built using Sphinx. Documents from other OpenLMIS repositories are collected and published on readthedocs.org nightly.
Documentation is available at: http://openlmis.readthedocs.io