
No way to run persistent containers #43

Closed
briveramelo opened this issue Mar 25, 2021 · 16 comments

@briveramelo

Continuing from the recent addition of support for run, exec, and network options in #38 (comment), I ran into an issue with running persistent containers.

I am running web service containers, so I have a database container and a web server container that each must run persistently. I would like to run a command like singularity-compose up and have both containers start up to run the web site, but instead the database will start and block the web server from starting. My workaround for now is a small shell script that backgrounds each service one after another with a separate command for each.
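For context, the workaround script looks roughly like this (a minimal sketch only; the image names, instance names, and service commands are placeholders rather than my actual setup):

#!/bin/sh
# Start each instance, then background its long-running command so it
# cannot block the next service from coming up.
singularity instance start db.sif db
singularity exec instance://db postgres > db.log 2>&1 &

singularity instance start web.sif web
singularity exec instance://web httpd > web.log 2>&1 &

wait   # keep the wrapper alive while the backgrounded commands run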

Here is a sample singularity-compose.yml for testing. It's not a direct match for the example above, but it reproduces the same issue.

version: "2.0"

instances:

  alp1:
    image: alpine/alpine.sif
    ports:
      - "1025:1025"
    start:
      options:
        - e
        - C
        - f
        - w
    exec:
      options:
        - e
        - C
      command: "echo 'alp1 in the house'"
  
  alp2:
    image: alpine/alpine.sif
    ports:
      - "1026:1026"
    start:
      options:
        - e
        - C
        - f
        - w
    exec:
      options:
        - e
        - C
      command: "ping alp1"
    run:
      args:
        - echo
        - 'this will never print because of the perpetual ping'

I wonder if we can provide an option in the yml to run the command in the background. I have tried workarounds like passing cli redirects through the command options, but without success. For example:

...
    exec:
      options:
        - e
        - C
      command: "ping alp1 >& /dev/null &"
...

subprocess.CalledProcessError: Command '['singularity', 'exec', '-e', '-C', 'instance://alp3', 'ping', 'alp2', '>&', '/dev/null', '&']' returned non-zero exit status 1.

So something like this could help:

...
    exec:
      background: true
...

The places I see this applying are with start, exec, and run, as I believe all 3 can take commands and have the potential to hold up other containers.

This line seems to execute the commands of interest and prevent subsequent actions / containers from running.

Am I missing an alternate way to background these, or is this worth pursuing? I'm open to alternate approaches.
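One possibility I have considered is wrapping the command in a shell so the redirect and the trailing '&' are interpreted inside the container instead of being passed to exec as literal arguments. An untested sketch, reusing the instance names from the example above:

# let a shell inside the container handle the redirection and backgrounding
singularity exec -e -C instance://alp2 sh -c 'ping alp1 > /dev/null 2>&1 &'

That said, a first-class background option in the yml still seems cleaner than shelling out.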

I hope the delay in writing this up hasn't been too long! Thanks again for your help supporting the fakeroot, network, and other options in the previous issue.

@vsoch
Member

vsoch commented Mar 25, 2021

@briveramelo for the background issue, I would approach the singularity community or sylabs and ask how it could be done. If there is a way, we can then implement it here.

@briveramelo
Author

For the order, you can try depends_on: https://github.com/singularityhub/singularity-compose/blob/f8897e107ab7cbba3c6656486a132aeb0699ad35/scompose/tests/configs/depends_on/singularity-compose.yml

Yes, I am familiar with this. I neglected it in this example, but this is not an issue for me.

@briveramelo for the background issue, I would approach the singularity community or sylabs and ask how it could be done. If there is a way, we can then implement it here.

I did some investigation, and all the examples suggest the redirection method I mentioned above. I'll still ask around about alternative methods that might integrate more closely with the singularity CLI and report back here.

@briveramelo
Author

briveramelo commented Mar 28, 2021

@vsoch I tried a suggestion to add the persistent command to the start args. Apparently this runs in the background by default: apptainer/singularity#5891 (comment)

However, this does not appear to be working with singularity-compose. I've tried a few different things to test whether the start args are ever called, e.g. printing the logs, and setting the start args to create a file and then using the exec command to list the directory where it should have been created. These tests show me the start args are never called.

I think there may be a line missing in the instance create function that calls the start args:

commands = "%s %s %s %s" % (" ".join(options), image, self.name, self.args)
bot.debug("singularity instance start %s" % commands)
self.instance = self.client.instance(
    name=self.name,
    sudo=self.sudo,
    options=options,
    image=image,
    args=self.args,
)
# If the user has exec defined, exec to it

@briveramelo
Author

This should be a sufficient yaml example to test that the start args are called:

singularity-compose.yml

version: "2.0"

instances:

  alp:
    image: alpine/alpine.sif
    start:
      options:
        - e
        - C
        - w
      args: "touch file1.txt"
    exec:
      options:
        - e
        - C
      command: "ls"

And if it works, you should see file1.txt printed out in the list when running
singularity-compose up

@vsoch
Member

vsoch commented Mar 28, 2021

Creating the instance has start set to True by default:

https://github.com/singularityhub/singularity-cli/blob/d17ed1e1523b25824261d0aef3de9edb763f0803/spython/instance/__init__.py#L34

So did you add a print to the Singularity Python start function and confirm the args are not there?

https://github.com/singularityhub/singularity-cli/blob/master/spython/instance/cmd/start.py

It looks like we don't have the debug printing there, so you likely wouldn't see the command.

@vsoch
Member

vsoch commented Mar 28, 2021

I'm not able to get the current singularity-compose master working; I am getting this weird error:

$ singularity-compose up -d
Creating alp
FATAL:   no SIF writable overlay partition found in /home/vanessa/Desktop/Code/singularity-compose-examples/v2.0/start-args/alp.sif

It's working for you?

@vsoch
Member

vsoch commented Mar 28, 2021

When I remove the options it's ok.

@vsoch
Member

vsoch commented Mar 28, 2021

Oh I see, it was the w for "writable" - my image was not writable.

@vsoch
Member

vsoch commented Mar 28, 2021

Okay - here is the full command that it's running.

singularity instance start --bind /home/vanessa/Desktop/Code/singularity-compose-examples/v2.0/start-args/resolv.conf:/etc/resolv.conf --bind /home/vanessa/Desktop/Code/singularity-compose-examples/v2.0/start-args/etc.hosts:/etc/hosts -e -C --net --network none --network-args "IP=10.22.0.2" --hostname alp --writable-tmpfs /home/vanessa/Desktop/Code/singularity-compose-examples/v2.0/start-args/alp.sif alp touch file1.txt

When I run that command verbatim with just singularity, I don't see the file generated either. So I don't think this is an issue with singularity-compose? Is that the correct way to supply start args?

@vsoch
Member

vsoch commented Mar 28, 2021

Here is the PR to singularity-cli if you want to test the commands being generated - singularityhub/singularity-cli#175. And then change the line that creates the instance to:

self.instance = self.client.instance(
    name=self.name,
    sudo=self.sudo,
    options=options,
    image=image,
    args=self.args,
    quiet=False,
)

and the command should print.

@vsoch
Member

vsoch commented Mar 28, 2021

Also, it's been a while, but I think for start args to be passed you have to have a %startscript defined? At least that's how it was originally.
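For reference, a minimal definition file with a startscript that forwards its arguments would look something like this (just a sketch - the bootstrap source is an arbitrary example):

Bootstrap: docker
From: alpine:latest

%startscript
    exec "$@"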

@vsoch
Member

vsoch commented Mar 28, 2021

okay, so here is a working example: https://github.com/singularityhub/singularity-compose-examples/tree/master/v2.0/start-args.

The two issues were:

  1. A container needs a startscript to honor any start args. You might have had one, but since I didn't have your image at first, I couldn't reproduce that.
  2. If you want the file to show up, you need to bind something to the present working directory (see the sketch at the end of this comment).

I'll work on adding verbosity for start/stop so the commands show with debug.
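For point 2, a quick way to check is to pass an explicit bind and point the start args at it - a rough sketch only, with placeholder paths, and assuming the startscript forwards its args:

singularity instance start --bind $PWD:/data -e -C alp.sif alp touch /data/file1.txt
singularity exec instance://alp ls /data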

@briveramelo
Author

So for .sif containers that are built from docker images, I believe the startscript is always defined like this:
"startscript": "#!/bin/sh"

And that seems to be the case with all the containers I've built, which I have verified for the alpine image I've been referencing in this thread.

Is that not sufficient?

As for showing the commands, this has also not been an issue for me. On the current master I can see the start args being printed to the command line when passing the --debug flag. Output below:

And I have the instance set to writable, so this is also not the issue.

DEBUG singularity instance start --bind /.../resolv.conf:/etc/resolv.conf --bind /.../etc.hosts:/etc/hosts -e -C -f -w --net --network-args "portmap=1026:1026/tcp" --network-args "IP=10.22.0.3" --hostname alp2 --writable-tmpfs alpine/alpine.sif alp touch file1.txt
WARNING: Skipping mount /etc/localtime [binds]: /etc/localtime doesn't exist in container

DEBUG singularity exec -e -C instance://alp ls
bin
dev
environment
etc
home
lib
media
mnt
opt
proc
root
run
sbin
singularity
srv
sys
tmp
usr
var

I had a hard time pinpointing the definition of this self.client.instance function. I'm guessing there is some conditional logic in there that is bypassing the start args.

@vsoch
Member

vsoch commented Mar 28, 2021

self.client.instance creates a new instance; it's called here, and it also calls start by default. It is not bypassing the start args - I tested the example that you provided me, and as I mentioned, you need to bind the PWD to somewhere on your host to see the file, and it's simply not going to work if you don't have a startscript that accepts args. The startscript you showed above would need to be:

"startscript": "#!/bin/sh $@"

or similar.

@briveramelo
Author

That did it!

It is a bit odd to me that singularity defaults to translating the docker entrypoint into the singularity run command instead of the singularity start command.

I didn't need to do any binding btw; it worked fine for me with a writable container.

Because I don't have a singularity definition file and my images start as dockerfiles, I think the best approach for me now is to modify my build workflow so that it finishes by updating the startscript to either add the "$@" line or copy the contents of the auto-generated runscript into the startscript.
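A rough sketch of what I have in mind for that post-build step (untested; the docker reference and paths are placeholders):

# convert the docker image to a writable sandbox, copy the auto-generated
# runscript over the startscript, then rebuild the SIF
singularity build --sandbox web/ docker://myorg/webserver:latest
cp web/.singularity.d/runscript web/.singularity.d/startscript
singularity build web.sif web/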

Thanks for your help!
