Commands

The following commands are currently supported. Remember, you must run these commands from the directory containing your singularity-compose.yml (the project's working directory) so that the correct instances are referenced.
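
For example, a minimal sketch, assuming a hypothetical project folder ~/myproject that holds your singularity-compose.yml:

$ cd ~/myproject
$ singularity-compose ps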

check

To do a sanity check of your singularity-compose.yml, you can use singularity-compose check. Note that this command requires jsonschema, which you can install via:

$ pip install singularity-compose[checks]

or directly:

$ pip install jsonschema

$ singularity-compose check
singularity-compose.yml is valid.

$ singularity-compose -f singularity-compose.yml \
          -f singularity-compose.override.yml check
singularity-compose.yml is valid.
singularity-compose.override.yml is valid.

To view the combined compose files, you can use --preview.

$ singularity-compose -f singularity-compose.yml \
          -f singularity-compose.override.yml check --preview

version: '2.0'
instances:
  cvatdb:
    start:
      options:
      - containall
    network:
      enable: false
    volumes:
    - ./recipes/postgres/env.sh:/.singularity.d/env/env.sh
    - ./volumes/postgres/conf:/opt/bitnami/postgresql/conf
    - ./volumes/postgres/tmp:/opt/bitnami/postgresql/tmp
    - /home/vagrant/postgres_data:/bitnami/postgresql
    build:
      context: .
      recipe: ./recipes/postgres/main.def
      options:
      - fakeroot

build

Build will either build a container recipe or pull a container into the instance folder. In both cases, the result is named after the instance so we can easily tell whether it has already been built or pulled. This is typically the first step required to build or pull your recipes, and it supports reproducibility by ensuring the container binary exists before anything is run.

$ singularity-compose build

The working directory is the parent folder of the singularity-compose.yml file. If the build requires sudo (for example, if you've defined sections in the config that require setting up networking with sudo), the build will instead print an instruction for you to run with sudo.
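
As a rough sketch of what to expect, assuming an instance named app with build context ./app (as in the examples later on this page), the built image lands in the instance folder and is named after the instance:

$ singularity-compose build
$ ls app/
Singularity  app.sif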

up

If you want to both build containers and bring up instances, you can use "up." Note that for builds that require sudo, this will still stop and ask you to build with sudo.

$ singularity-compose up

resolv.conf

By default, singularity-compose will generate a resolv.conf for you to bind to the container, instead of using the host. It's a simple template that uses Google nameservers:

# This is resolv.conf generated by singularity-compose. It is provided
# to provide Google nameservers. If you don't want to have it generated
# and bound by default, use the up --no-resolv argument.

nameserver 8.8.8.8
nameserver 8.8.4.4

If you want to disable this:

$ singularity-compose up --no-resolv
Creating app

create

Given that you have built your containers with singularity-compose build, you can create your instances as follows:

$ singularity-compose create

Akin to "up," you can also disable the generation of the resolv.conf.

$ singularity-compose create --no-resolv
Creating app

restart

Restart is provided as a convenience to run "down" and then "up." You can specify most of the same arguments as create or up.

$ singularity-compose restart --no-resolv
Stopping app
Creating app

ps

You can list running instances with "ps":

$ singularity-compose ps
INSTANCES  NAME   PID    IMAGE
1          app    6659   app.sif
2          db     6788   db.sif
3          nginx  6543   nginx.sif

shell

It's sometimes helpful to peek inside a running instance, either to look at permissions, inspect binds, or manually test running something. You can easily shell inside of a running instance:

$ singularity-compose shell app
Singularity app.sif:~/Documents/Dropbox/Code/singularity/singularity-compose-example>

exec

You can easily execute a command in a running instance:

$ singularity-compose exec app ls /
bin
boot
code
dev
environment
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
singularity
srv
sys
tmp
usr
var
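
As a follow-up usage example (assuming the default resolv.conf generation described above was not disabled), exec is a quick way to confirm the bind from inside the instance; the generated comment header is omitted here:

$ singularity-compose exec app cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4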

run

If a container has a %runscript section (or a Docker entrypoint/cmd that was converted to one), you can run that script easily:

$ singularity-compose run app

If your container doesn't have any kind of runscript, the startscript will be used instead.
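
To make the distinction concrete, here is a minimal, hypothetical recipe snippet (not taken from this project): run executes the %runscript when one is defined, and falls back to the %startscript (the script the instance itself executes on start) otherwise.

Bootstrap: docker
From: ubuntu:22.04

%startscript
    # executed when the instance is started
    exec sleep infinity

%runscript
    # executed by singularity-compose run
    echo "hello from the runscript"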

down

You can bring one or more instances down (meaning, stop them) by doing:

$ singularity-compose down
Stopping (instance:nginx)
Stopping (instance:db)
Stopping (instance:app)

To stop a custom set of instances, just specify their names:

$ singularity-compose down nginx

It is also possible to specify a timeout (as for singularity instance stop) in order to kill instances after the specified number of seconds:

$ singularity-compose down -t 100

logs

You can view logs for all instances, or just for specific named ones:

$ singularity-compose logs --tail 10
nginx ERR
nginx: [emerg] host not found in upstream "uwsgi" in /etc/nginx/conf.d/default.conf:22
2019/06/18 15:41:35 [emerg] 15#15: host not found in upstream "uwsgi" in /etc/nginx/conf.d/default.conf:22
nginx: [emerg] host not found in upstream "uwsgi" in /etc/nginx/conf.d/default.conf:22
2019/06/18 16:04:42 [emerg] 15#15: host not found in upstream "uwsgi" in /etc/nginx/conf.d/default.conf:22
nginx: [emerg] host not found in upstream "uwsgi" in /etc/nginx/conf.d/default.conf:22
2019/06/18 16:50:03 [emerg] 15#15: host not found in upstream "uwsgi" in /etc/nginx/conf.d/default.conf:22
nginx: [emerg] host not found in upstream "uwsgi" in /etc/nginx/conf.d/default.conf:22
2019/06/18 16:51:32 [emerg] 15#15: host not found in upstream "uwsgi" in /etc/nginx/conf.d/default.conf:22
nginx: [emerg] host not found in upstream "uwsgi" in /etc/nginx/conf.d/default.conf:22
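
For a specific instance, pass its name (assuming an instance named nginx, as in the examples on this page):

$ singularity-compose logs nginx --tail 10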

config

You can load and validate the configuration file (singularity-compose.yml) and print it to the screen as follows:

$ singularity-compose config
{
    "version": "1.0",
    "instances": {
        "nginx": {
            "build": {
                "context": "./nginx",
                "recipe": "Singularity.nginx"
            },
            "volumes": [
                "./nginx.conf:/etc/nginx/conf.d/default.conf:ro",
                "./uwsgi_params.par:/etc/nginx/uwsgi_params.par:ro",
                ".:/code",
                "./static:/var/www/static",
                "./images:/var/www/images"
            ],
            "volumes_from": [
                "app"
            ],
            "ports": [
                "80"
            ]
        },
        "db": {
            "image": "docker://postgres:9.4",
            "volumes": [
                "db-data:/var/lib/postgresql/data"
            ]
        },
        "app": {
            "build": {
                "context": "./app"
            },
            "volumes": [
                ".:/code",
                "./static:/var/www/static",
                "./images:/var/www/images"
            ],
            "ports": [
                "5000:80"
            ],
            "depends_on": [
                "nginx"
            ]
        }
    }
}
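
Since the output is JSON, a small convenience sketch (assuming jq is installed and that no extra log lines precede the JSON) is to filter for a single instance:

$ singularity-compose config | jq '.instances.db'
{
  "image": "docker://postgres:9.4",
  "volumes": [
    "db-data:/var/lib/postgresql/data"
  ]
}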

Global arguments

The following arguments are supported for all available commands.

debug

Set logging verbosity to debug.

$ singularity-compose --debug version

This is equivalent to passing --log-level=DEBUG to the CLI.

$ singularity-compose --log-level='DEBUG' version

log_level

Change logging verbosity. Accepted values are: DEBUG, INFO, WARNING, ERROR, CRITICAL

$ singularity-compose --log-level='INFO' version

file

Specify the location of a Compose configuration file

Default value: singularity-compose.yml

Aliases --file, -f.

You can supply multiple -f configuration files. When you supply multiple files, singularity-compose combines them into a single configuration. It builds the configuration in the order you supply the files. Subsequent files override and add to their predecessors.

For example, consider this command:

$ singularity-compose -f singularity-compose.yml -f singularity-compose.dev.yml up

The singularity-compose.yml file might specify a webapp instance:

instances:
  webapp:
    image: webapp.sif
    start:
      args: "start-daemon"
    ports:
      - "80:80"
    volumes:
      - /mnt/shared_drive/folder:/webapp/data

If singularity-compose.dev.yml also specifies this same instance, any matching fields override those in the previous file.

instances:
  webapp:
    start:
      args: "start-daemon -debug"
    volumes:
      - /home/user/folder:/webapp/data

At runtime, the examples above would be merged into:

instances:
  webapp:
    image: webapp.sif
    start:
      args: "start-daemon -debug"
    ports:
      - "80:80"
    volumes:
      - /home/user/folder:/webapp/data
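
To confirm the merged result before bringing anything up, you can combine multiple -f files with the check command described earlier:

$ singularity-compose -f singularity-compose.yml \
          -f singularity-compose.dev.yml check --preview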

project-name

Specify project name.

Default value: $PWD

Aliases --project-name, -p.

$ singularity-compose --project-name 'my_cool_project' up

project-directory

Specify project working directory

Default value: compose file location

$ singularity-compose --project-directory /home/user/myfolder up

home