
HOW-TO: Docker usage with GogglesAPI as example

This guide assumes Docker is already installed & running.


Getting started: setup and usage as a composed Docker service

Using docker-compose is much easier than linking individual containers together, although not as versatile.

Most docker-compose sub-commands mirror their Docker counterparts, such as build, logs, stop, exec, ps, images and many more; help will list the available commands.

Note the difference between stop, which just shuts down the services, and down, which also removes all created containers & networks (and, with additional parameters, images and volumes too).
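For example:

$> docker-compose -f docker-compose.dev.yml stop                      # halt the containers, keeping everything in place
$> docker-compose -f docker-compose.dev.yml down                      # also remove the created containers & networks
$> docker-compose -f docker-compose.dev.yml down --rmi all --volumes  # also remove the images & named volumes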

up will build any missing container the resulting service needs and reuse any container already created by a successful build and bound to the same service (a same-named container used & built by another service configuration will create conflicts - see below).

GogglesAPI has 3 different docker-compose.yml configuration files that take care of binding the containers together: one for each supported running environment, each distinguished by an explicit extension.

Each docker-compose file in turn uses a specific Dockerfile with a similar explicit extension (minus the .yml part) just for specifying all the steps needed to create the app container alone (the DB container is based on a pretty standard MySQL/MariaDB image).

Each app container has a dedicated name and refers to a bespoke DB container, which in turn may access different serialized DB data, depending on the running environment or the external volume mounts defined in the configuration.

| docker-compose extension | Rails environment | DB name | DB container | App container | Pub. port |
| --- | --- | --- | --- | --- | --- |
| .dev.yml | development | goggles_development | goggles-db.dev | goggles-api.dev | 8081 |
| .prod.yml | production | goggles | goggles-db.prod | goggles-api.prod | 8081 |
| .staging.yml | staging | goggles | goggles-db.staging | goggles-api.staging | 8081 |

Currently the production configuration is the only one that keeps SSL enforcement ON; staging disables it.

Each configuration mounts:

| localhost | container |
| --- | --- |
| db_data volume (see below) | (DB) /var/lib/mysql |
| gem_cache volume | (app) not currently used by goggles_api |
| node_modules volume | (app) not currently used by goggles_api |
| ~/Projects/goggles_api/db/dump | (app) /app/db/dump |
| ~/Projects/goggles_api/config/master.key | (app) /app/config/master.key |

Make sure the explicit paths above correspond to valid files and folders on localhost, or the composed service will fail to start or won't work as expected.
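A quick sanity check, assuming the default paths used throughout this guide:

$> ls -ld ~/Projects/goggles_api/db/dump ~/Projects/goggles_api/config/master.key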

Each named volume (for example, db_data) will be mapped to a local subfolder stored under the volumes directory of your Docker installation (typically, /var/lib/docker/volumes/) and separated by service name & volume name (resulting in something like /var/lib/docker/volumes/goggles_api_db_data/_data).

Details can be inspected with a simple docker inspect <CONTAINER_NAME>.
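Named volumes can also be listed & inspected directly:

$> docker volume ls
$> docker volume inspect goggles_api_db_data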

1. Bring up the composed service (app + DB)

To run any configuration, choosing for instance the dev one, use:

$> docker-compose -f docker-compose.dev.yml up

This will run the composed service in the foreground and bind the services to the containers. A simple CTRL-C will stop the service.
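If you prefer to keep the console free, the same service can run detached, tailing the logs separately:

$> docker-compose -f docker-compose.dev.yml up -d
$> docker-compose -f docker-compose.dev.yml logs -f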

If the DB container for the chosen environment still needs to be created or is brand new, the database will either be missing entirely or newly created and empty.

In case the goggles-db.dev service has been previously created by another build setup, up will fail just before recreating the DB service, because its name will already be taken by another container.

In this case, just remove the offending container:

$> docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                   PORTS               NAMES
6a15cf082761        mariadb:latest      "docker-entrypoint.s…"   2 weeks ago         Exited (0) 2 weeks ago                       goggles-db.dev
$> docker rm 6a15cf082761

In case you need to force the rebuilding of the composed service, just add the --build parameter:

$> docker-compose -f docker-compose.dev.yml up --build

2. Make sure a DB dump is available

Each docker-compose file mounts the explicit db/dump app folder as a volume inside the container.

Obtain a test.sql.bz2 dump from goggles_db (stored under spec/dummy/db/dump) and copy it to your local <PROJECT_ROOT>/db/dump, so that the container will be able to read it.
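For example, assuming goggles_db is cloned as a sibling project (adjust the source path to your actual checkout):

$> mkdir -p ~/Projects/goggles_api/db/dump
$> cp ~/Projects/goggles_db/spec/dummy/db/dump/test.sql.bz2 ~/Projects/goggles_api/db/dump/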

3. Execute db:rebuild on the container

Open another console and run:

$> docker exec -it goggles-api.dev sh -c 'bundle exec rails db:rebuild from=test to=development'

Then switch to the console of the running service, stop it with CTRL-C and restart it, so that any caching is cleared and the app reloads with a fully accessible database.

Bring up the composed service again and test it, for example with the curl requests shown in the "Test the API service with curl" section below.


Docker CLI login

In order to push & pull image tags from the Docker Registry without restrictions you'll need to be logged in, because the Docker Registry currently limits the number of anonymous image pulls allowed within a certain time span.

Logging into Docker from the command line is not required for basic usage & setup but it may be necessary during periods of frequent updates to the base source repo.

The Docker Engine can keep user credentials in an external credentials store, such as the native keychain of the operating system. Using an external store is more secure than storing credentials in the plain-JSON Docker configuration file. (You'll get a warning if you log-in with plain-text credentials.)

In case you don't have a CLI password manager, you can try pass, any D-Bus-based alternative (usually under KDE), the Apple macOS keychain, or the MS Windows Credential Manager.

Under Ubuntu:

  1. Install pass as a password manager:
$> sudo apt-get install pass
  2. You'll need the Docker helper that interfaces with the pass commands. Download, extract, make executable, and move docker-credential-pass (currently at v0.6.3):
$> wget https://github.com/docker/docker-credential-helpers/releases/download/v0.6.3/docker-credential-pass-v0.6.3-amd64.tar.gz && tar -xf docker-credential-pass-v0.6.3-amd64.tar.gz && chmod +x docker-credential-pass && sudo mv docker-credential-pass /usr/local/bin/
  3. Check that docker-credential-pass works by running it without arguments. You should see: "Usage: docker-credential-pass <store|get|erase|list|version>".

  4. Create a new gpg2 key:

$> gpg2 --gen-key
  5. Follow the prompts from the gpg2 utility (enter your actual name, email & passphrase).

  6. Initialize pass using the newly created key:

$> pass init "<Your-actual-name-used-for-GPG>|<GPG-key-ID>"
  7. Add the credentials store ("credsStore": "pass") to $HOME/.docker/config.json to tell the Docker engine to use it. The value of this config property must be the suffix of the helper program name (i.e. everything after docker-credential-).

In our example (using pass as storage & docker-credential-pass as helper) you can create the $HOME/.docker/config.json file manually:

{
  "credsStore": "pass"
}

Alternatively, you can add the "credsStore": "pass" entry to the Docker config using a single sed command:

$> sed -i '0,/{/s/{/{\n\t"credsStore": "pass",/' ~/.docker/config.json
  8. Log out with docker logout if you are currently logged in (this will also remove any previously stored plain-text credentials from the configuration file).

  9. Log in to Docker to store the credentials with a simple docker login. Docker will ask for username and password, and then you'll need to provide the passphrase used for GPG2.

Alternatively, you can specify the correct username and omit the password from the command line, using the Docker token as the credential: when prompted for a password, enter the secret Docker authorization token instead.

(The secret Docker token is currently available only inside our 🔑 config channel on Slack)

$> docker login --username steveoro

In this case, on the first image pull you'll be asked for the GPG key passphrase. If you use a system-wide password manager that stores the passphrase, you shouldn't be bothered again on any subsequent pull or push.
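To double-check that the helper is now managing the registry credentials:

$> docker-credential-pass list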



Setup as individual Docker containers

Besides its "normal" installation as a Rails application or its usage as a composed service (made by binding together 1 app container + 1 DB container), the repository includes the necessary Dockerfiles for setting up and building the app container by itself. The same can be achieved for the DB container simply by using a standard MySQL/MariaDB base image to build it standalone.

So, instead of a full-blown MySQL/MariaDB installation on localhost, you'd use just the DB container, with the repository cloned on localhost acting in place of the containerized app: simply edit the current environment's database.yml so that it accesses the ports of the DB running inside the container.
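As a minimal sketch (assuming the standard mysql2 adapter and the DB container parameters used in the next chapters), the development section of config/database.yml could look like:

development:
  adapter: mysql2
  encoding: utf8mb4
  host: 127.0.0.1
  port: 33060
  database: goggles_development
  username: root
  password: My-Super-Secret-Pwd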

In conclusion, using the individual containers for experimenting or testing is definitely feasible, although it may turn out a bit cumbersome if you're new to Docker.

Read the next chapters for step-by-step instructions.

Additional note regarding DB container usage versus using a native DB installation

At the risk of stating the obvious: a containerized DB referring to a database with the same name as one already existing on localhost (i.e. a database previously created by a native installation of the same DB engine) is not the same database. The two can coexist, but they refer to different data files.

The DB container needs to serialize its data on localhost (because, basically, containers are meant to be stateless): this is why it needs an external data folder mounted as a data volume to store the data.

The DB container will refer to a completely different physical database (also listening on different ports than the ones used by any other localhost DB running concurrently), storing its files inside a different (localhost) folder.

For this reason, remember that the first time the container is created it needs to map to a valid serialization data volume, so that the DB files can be reused between runs.


DB container setup & usage ("low level" approach)

Check any already existing local Docker images with docker images.

Make sure you're logged into DockerHub with docker login.

Pull the latest MySQL/MariaDB image if it's missing. For MySQL:

$> docker pull mysql:latest

Or, for MariaDB:

$> docker pull mariadb:10.3.25

Create & run the container in the foreground with:

$> docker run --name goggles-db -e MYSQL_DATABASE=goggles_development \
     -e MYSQL_ROOT_PASSWORD="My-Super-Secret-Pwd" \
     -v ~/Projects/goggles_db.vol:/var/lib/mysql \
     -p 127.0.0.1:33060:3306 mariadb:10.3.25 \
     --character-set-server=utf8mb4 \
     --collation-server=utf8mb4_unicode_ci

Note that you'll have to:

  • use your own My-Super-Secret-Pwd for the MySql/MariaDb root user (can be anything you want);
  • use goggles or goggles_development as the actual database name, depending on the environment or purpose;
  • use something like ~/Projects/goggles_db.vol as the local data volume to mount; don't forget to create the folder if missing: mkdir ~/Projects/goggles_db.vol;
  • the published port mapping 127.0.0.1:33060:3306 will bind port 3306 of the container to your localhost's 33060. (*)

(*) - Note that the published entry port will be reachable via TCP with an IP:PORT mapping, while any other MySQL service already running on localhost will remain accessible through the usual socket file. (So both types of database can coexist.)
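A quick way to verify that the published port is reachable from localhost (assuming a mysqladmin client is locally installed):

$> mysqladmin --host=127.0.0.1 --port=33060 --user=root --password="My-Super-Secret-Pwd" ping
mysqld is alive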

For consistency & stability we'll stick with the current MariaDb version as of this writing, tagged 10.3.25.

Eventually (as soon as you feel confident with the container setup) you'll want to add a -d parameter to the run statement, before the image name, for background/detached execution (docker run -d ...).

More precisely, the DB container can be reached from another container using its Docker network name (usually defined inside docker-compose.yml) and its internal port (not the one published on localhost).

The same DB container service can instead be accessed from localhost using the localhost IP (0.0.0.0) with its published port, forcing a TCP/IP connection (no sockets) through the host parameter.

Check the running containers with:

$> docker ps -a

When in detached mode, you can check the console logs with:

$> docker logs --follow goggles-db

Stop the container with CTRL-C if running in foreground; or from another console (when in detached mode) with:

$> docker stop goggles-db

In case of need, remove old stopped containers with docker rm CONTAINER_NAME_OR_ID and their images with docker rmi IMAGE_ID.

Existing stopped containers can be restarted easily:

$> docker start goggles-db

Connecting to the DB container with the MySQL client

Assuming the DB container is running, you have two possibilities:

  • Use the mysql client from within the container with an interactive shell:

    $> docker exec -it goggles-db sh -c 'mysql --password="My-Super-Secret-Pwd" --database=goggles_development'
  • Use the mysql client from localhost (if the client is available) and then connect to the service container using the IP protocol and the correct published port:

    $> mysql --host=0.0.0.0 --port=33060 --user=root --password="My-Super-Secret-Pwd" --database=goggles_development

Restoring the whole DB from existing backups

The container serializes its database on the mounted data volume on your workstation. In order to restore its DB you'll need four basic steps:

  1. get the DB dump in SQL format
  2. drop the existing DB when not empty
  3. recreate the DB
  4. execute the script

Step 1:

Assuming we have a compressed dump located at ~/Projects/goggles.docs/backup.db/goggles-backup.20201005.sql.bz2, unzip the DB backup in the current folder:

$ bunzip2 -ck ~/Projects/goggles.docs/backup.db/goggles-backup.20201005.sql.bz2 > goggles-backup.sql

Step 2 & 3:

Drop & recreate the existing database from scratch, choosing one of these methods (either way is fine):

  • from within the container:

    $> docker exec -it goggles-db sh -c 'mysql --user=root --password="My-Super-Secret-Pwd" --execute="drop database if exists goggles_development; create database goggles_development;"'
  • from localhost (pure SQL solution, valid if a mysql client is locally available), connect to the service container using the IP protocol and the correct published port:

    $> mysql --host=0.0.0.0 --port=33060 --user=root \
             --password="My-Super-Secret-Pwd" \
             --execute="drop database if exists goggles_development; create database goggles_development;"

Step 4:

If your SQL backup file refers to a single DB (not a multi-DB backup) and includes a USE <db_name> statement near the beginning, you'll need to remove it to obtain a truly DB-independent script (otherwise the following --database= parameter won't have any effect).
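Assuming the statement sits on a line of its own (as mysqldump typically writes it), you can strip it with:

$> sed -i '/^USE /d' goggles-backup.sql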

Run the SQL script to restore the structure & data (choose your preferred way):

  • From within the container: first create a (hard) link so that the container can reach the extracted SQL dump file through the mounted data directory, with sudo ln goggles-backup.sql ~/Projects/goggles_db.vol/goggles-backup.sql. Then execute the script from inside the container's client:

    $> docker exec -it goggles-db sh -c 'mysql --user=root --password="My-Super-Secret-Pwd" --database=goggles_development < /var/lib/mysql/goggles-backup.sql'
  • From localhost (assuming you can run a mysql client from the folder where you expanded the dump file):

    $> mysql --host=0.0.0.0 --port=33060 --database=goggles_development --user=root \
             --password="My-Super-Secret-Pwd" < ./goggles-backup.sql

Both ways will require some time (a few minutes) depending on the dump size.

When it's done, reconnect to the MySQL client & check that it all went well:

MariaDB [goggles_development]> show tables;

--- snip ---

MariaDB [goggles_development]> desc users;

--- snip ---

Remember to delete the uncompressed dump in the current folder (and its link, if created) when done.
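For example:

$> rm ./goggles-backup.sql
$> sudo rm ~/Projects/goggles_db.vol/goggles-backup.sql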

Creating new DB backup dumps:

You can choose between two possibilities:

  • from within the container:

    $> docker exec -it goggles-db sh -c 'mysqldump --user=root --password="My-Super-Secret-Pwd" -l --triggers --routines -i --skip-extended-insert --no-autocommit --single-transaction goggles_development' | bzip2 -c > goggles_development-backup.sql.bz2
  • from localhost (if mysqldump is available):

    $> mysqldump --host=0.0.0.0 --port=33060 --user=root --password="My-Super-Secret-Pwd" \
                 -l --triggers --routines -i --skip-extended-insert \
                 --no-autocommit --single-transaction \
                 goggles_development | bzip2 -c > goggles_development-backup.sql.bz2

If the first method fails (or halts at the beginning of the dump, as if the DB were empty), a dump of all the databases usually does the trick:

$> docker exec -it goggles-db sh -c 'mysqldump --user=root --password="My-Super-Secret-Pwd" -l --triggers --routines -i --skip-extended-insert --no-autocommit --single-transaction --all-databases' | bzip2 -c > all_dbs-backup.sql.bz2

Unfortunately, a multi-DB backup created this way cannot easily be restored with the procedure shown in "Restoring the whole DB from existing backups", but it works fine if you just need to back up your data.


API container setup & usage ("low level" approach)

As we've seen, while the DB container is a pretty standard MySQL/MariaDB container using a mounted external data volume, the main app container is custom built and supports different environments.

Already-built images dedicated to development, staging & production are available on DockerHub, each one tagged by environment and version. So you may pull any of those in case you don't want to rebuild the image locally.

The latest tag is used just for production. The naming/tag format is <REPO_NAME>:<TAG_NAME>.

Existing pre-built images are pulled automatically from the DockerHub registry when a local copy is not found, each time you run docker-compose up or explicitly do a docker pull <IMAGE_NAME:TAG>.

Always remember to be logged in on DockerHub (with docker login), otherwise your pulls may hit the rate limits.

The local container image(s) can be recreated from scratch each time you update the source code.

Updating a remote image on the registry is a push (see more down below).

In each case, you'll need to specify the "full path" (<DOCKERHUB_USERNAME>/<IMAGE_NAME>:<TAG>) to the actual image you want to build, pull or push so that the referenced image object can be uniquely identified. This "full path name" binds your local image(s) to the actual Docker repository.

(You may use hexadecimal UIDs to refer to local images or containers, but those obviously won't correspond to anything on DockerHub.)
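For example, to pull a specific pre-built image and check it locally (the version tag here is just illustrative):

$> docker pull steveoro/goggles-api:dev-0.1.1
$> docker images steveoro/goggles-api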

Build a new Docker tagged image to test it before release

(Note: this procedure assumes you are about to locally test a new minor or major release; it differs from committing changes bit-by-bit for individual or ephemeral builds, where the software life cycle is definitely shorter.)

Assuming you have made useful changes to the repository and want to test them inside a new running container, you'll need to update (rebuild) the image from which the containers are spawned.

To build a new tagged image giving it - for example - a "0.1.1" tag, run:

  • For development:

    $> docker build -t steveoro/goggles-api:dev-0.1.1 \
                    -f Dockerfile.dev .
  • For production:

    $> docker build -t steveoro/goggles-api:prod-0.1.1 \
                    -t steveoro/goggles-api:latest \
                    -f Dockerfile.prod .

The specified Dockerfile will define the context and the environment for the build.

Make sure you have a valid .env file that includes the DB password (customize .env.example first).
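For instance:

$> cp .env.example .env
$> $EDITOR .env   # set the actual DB password value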

Create a new app container ready for use

The create command will set up a new container configuration ahead of time, based on an existing image, so that it is ready to start when you need it.

docker run both creates the container and starts it in one step, using the console foreground for the log output.

Both create & run create new containers (with their configurations) and require an existing base image (which is downloaded automatically if the name refers to a remote repository). (If the container name is already taken, you can destroy the existing container with rm.)

Assuming we want to create a new production container with a DB running inside another linked container ("everything on containers" scenario), we need to:

  • mount the local master.key file for the credentials
  • set the correct port mapping to export the service on localhost
  • link the container to the same network/service name on which the DB container/service is mapped. (*)

(*) - (Alternatively, we could build a new docker network on which an existing DB service could be advertised, but we'll skip this case which is out of scope of a typical deploy for this app.)

So, make sure the goggles-db container is up and running with docker ps -a. If the DB container has been previously stopped, restart it with start. If it has been removed, do a run as shown in the previous chapter ("DB container setup & usage"). Having the DB container running ensures that the default Docker "bridge" network is already in place, with the DB container on it.

To create a new container from the "latest" tag (as an example), do:

$> docker create --name goggles-api.prod \
                 -p 127.0.0.1:8081:8081 \
                 -e MYSQL_ROOT_PASSWORD="My-Super-Secret-Pwd" \
                 --link goggles-db \
                 -v ~/Projects/goggles_api/config/master.key:/app/config/master.key \
                 steveoro/goggles-api:latest

The --link goggles-db option will mount the created container on the same (bridge) network as goggles-db (otherwise the two containers won't be able to communicate with each other).
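You can verify that both containers sit on the same network with:

$> docker network inspect bridge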

You can now safely start the container and check the logs for errors using the "follow" option (-f; CTRL-C to exit):

$> docker start goggles-api.prod
goggles-api.prod
$> docker logs -f goggles-api.prod

Stopped (stop) containers can be restarted (start) or removed (rm); exec allows you to execute commands on a running container.

If a container is unnamed or cannot be referenced by its service name, refer to it using its unique ID (docker ps -a shows them).

Connecting to the API container

Once the service (goggles-api.dev for example) is running, you can:

  • Execute an interactive shell inside the container with:

    $> docker exec -it goggles-api.dev sh
  • Enter directly the Rails console with:

    $> docker exec -it goggles-api.dev sh -c 'bundle exec rails c'
    Loading development environment (Rails 6.0.3.4)
    irb(main):001:0> GogglesDb::User.count
    [...snip...]
    (5.0ms)  SELECT COUNT(*) FROM `users`
    => 661

Test the API service with curl

The API usage flow is typically:

  1. Retrieve a valid JWT for the session, connecting with a User with valid credentials, plus a secret static token.

  2. Use the returned JWT value in the header of each subsequent request, until the JWT expires.

Check out the full API Blueprints stored inside the repository for all details.

Simple example:

  1. Start a new API session:
$> curl -i -X POST -H "Content-Type: application/json" \
        -d '{"e": "steve.alloro@whatever.example", "p": "<VALID_PASSWORD_FOR_THIS_USER>", "t": "<API_STATIC_KEY_INSIDE_CREDENTIALS>"}' \
        "localhost:8081/api/v3/session"
HTTP/1.1 201 Created
Content-Type: application/json
[...snip...]

{"msg":"OK","jwt":"<CORRECT_JWT_VALUE>"}

The <API_STATIC_KEY_INSIDE_CREDENTIALS> is the static token value for the api_static_key stored in your credentials (which can also be seen by opening a Rails console and copying the correct key value from Rails.application.credentials).
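For example, from inside the running app container (assuming the key is stored at the top level of the credentials file):

$> docker exec -it goggles-api.dev sh -c 'bundle exec rails c'
irb(main):001:0> Rails.application.credentials.api_static_key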

See repository credentials: management and creation.

  2. Retrieve Swimmer details (using the session JWT) for swimmer ID 142:
$> curl -i -X GET -H "Content-Type: application/json" \
        -H "Authorization: Bearer <CORRECT_JWT_VALUE>" \
        "localhost:8081/api/v3/swimmer/142"
HTTP/1.1 200 OK
Content-Type: application/json
[...snip...]

{"id":142,"lock_version":1,"last_name":"ALLORO","first_name":"STEFANO","year_of_birth":1969, [...snip...]

(Repeat step #2 for any other request.)

Updating the API container image

Upon each push to the CI build flow, the latest image gets automatically updated for any successful build.

Other tagged images (ENVIRONMENT-MAJOR.MINOR.BUILD) are rebuilt only when a new source release is made on GitHub: the source repo gets tagged with a MAJOR.MINOR.BUILD version code and a new tar release becomes available.

Be sure to log into DockerHub before any pull or push operation from the Docker repository.

To update a local version of an existing container image, you can either pull it from DockerHub "as is", or create a new image (with build) and tag it with an existing name tag if you want to overwrite it. (Or just use a new tag, if you know what you're doing and want to create a "manual release".)

Re-tag an existing image with:

$> docker tag local-image:tag_name steveoro/goggles-api:tag_name

Push (and overwrite) the updated tagged image onto the Docker registry with:

$> docker push steveoro/goggles-api:tag_name
