Should --cleanenv work for instance start? #5353

Closed · vsoch opened this issue Jun 9, 2020 · 7 comments

vsoch (Collaborator) commented Jun 9, 2020

Version of Singularity

  • 3.5.2

Expected behavior

I would expect to run

singularity instance start --cleanenv <container> <name> 

and then have the environment cleaned for the start script and all subsequent interactions.

Actual behavior

--cleanenv doesn't seem to do anything: you can shell/exec/otherwise interact with the instance, and the host environment is still there.

To give you some context: when we start instances with singularity-compose, we want to enforce isolation of the environment and be sure that our services (when interacted with) don't expose any secrets. A user familiar with docker-compose would also expect that. The issue was first brought up in singularityhub/singularity-compose#25 (comment), so if we are unable to support --cleanenv for start, my questions would be:

  1. Why does the flag exist? (It might be confusing that the host environment persists with exec/shell/etc.)
  2. What other approach can we take to ensure the environment is cleaned when starting instances?

Thank you! I'll put a dummy example to reproduce this below.

Steps to reproduce this behavior

$ export TACOS=AWESOME
$ singularity instance start --cleanenv docker://busybox martin
INFO:    Using cached SIF image
INFO:    instance started successfully
$ singularity shell instance://martin
Singularity> env | grep TACOS
TACOS=AWESOME

What OS/distro are you running

Ubuntu 18.04

How did you install Singularity

Don't remember, likely a master branch some time ago :)

dtrudg (Contributor) commented Jun 9, 2020

--cleanenv on instance start applies to the execution of the startscript in that instance; with shell instance:// you are starting a new shell associated with the instance namespaces etc. You can shell --cleanenv instance:// to get what you want in that shell.
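For example, continuing the TACOS demo from above, the difference would look roughly like this (a sketch; the second grep prints nothing because the shell itself was started with a clean environment):

$ singularity shell instance://martin
Singularity> env | grep TACOS
TACOS=AWESOME
Singularity> exit
$ singularity shell --cleanenv instance://martin
Singularity> env | grep TACOS
Singularity>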

I do see the argument here that --cleanenv on instance start should imply it for any later interaction, and possibly likewise for other flags that affect the environment, allow GPU use, etc. However, there is also something nice about the current consistency between different invocations of singularity shell against different targets (instance, SIF, etc.). We'd also need to think about persisting this configuration somehow if we want it to be inferred by other operations on the instance.

If your singularity-compose expects a clean environment, can it simply set --cleanenv on any interaction with singularity at present?
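A minimal sketch of that approach, as hypothetical shell helpers inside singularity-compose (the function names and layout are illustrative only):

# Hypothetical helpers: every compose-initiated interaction forces --cleanenv.
compose_shell() {
    singularity shell --cleanenv "instance://$1"
}

compose_exec() {
    instance="$1"; shift
    singularity exec --cleanenv "instance://${instance}" "$@"
}

With these, compose_shell martin would open a shell in the instance without inheriting the host environment.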

vsoch (Collaborator, Author) commented Jun 9, 2020

If your singularity-compose expects a clean environment, can it simply set --cleanenv on any interaction with singularity at present?

That would be a reasonable solution, although then interaction with the instances would need to be limited to singularity-compose commands. @biocyberman could you chime in here about the issue you are having? I thought I understood, but after your recent comment I'm not following: a service container that requires a password to start would need to have that variable from the host after start, and then you could unset it. A user that is able to exec/shell would already have access to their local environment (so shelling inside, if the envar is unset, they would not see it). We could add this --clean-env option to singularity-compose commands, but it would still be possible to use singularity on the host directly and just override that.

biocyberman commented

@vsoch Thanks for following up on this. You got it right, as you wrote in the Expected behavior section above. Specifically:

have the environment cleaned for the start script and all subsequent interactions

All secret passwords are used only for setting up a service the first time. Therefore singularity-compose is allowed to know the password, but the container should know only the encrypted passwords, if any. Consider this analogy:

An administrator runs a script called setupUser.sh which reads a plaintext password from the ./secrets-file file. Only the administrator can access ./secrets-file, and it is used once. After that, the user can use the account with the password and even change it if necessary. The server stores only the encrypted password. Other users cannot see the password.

Now translate to the use case:

  • Administrator ~ developer/administrator of the singularity-compose solution.
  • setupUser.sh ~ the singularity-compose YAML file.
  • The server ~ the container.
  • ./secrets-file ~ ./secrets-file, preferably not mounted or bound into the container.

I realize that some environment variables need to be kept in the container: something like JAVA_HOME in the old days, or a custom addition to $PATH. These can be saved in ./runtime-env-file.

I agree that unset VAR in %startscript, or in the post: section of singularity-compose, can be a solution if there is no more elegant way.
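As a rough illustration of that idea, a definition-file fragment might look like this (SERVICE_PASSWORD and the setup command are hypothetical; note that the unset only affects the startscript's own process, not later shell/exec sessions):

%startscript
    # Hypothetical: consume the secret once to initialise the service...
    /opt/service/setup --password "$SERVICE_PASSWORD"
    # ...then drop it so nothing else launched from this script inherits it.
    unset SERVICE_PASSWORD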

vsoch (Collaborator, Author) commented Jun 10, 2020

@biocyberman if the container service (the startscript) needs the password to start the service, I'm not clear how it could be the case that singularity-compose "knows" the password but the container does not. To get the environment in there in the first place, you have to map it into the .singularity.d environment folder, so the only thing I can think of to "remove" it after starting is to unset the variable and delete the file. The same is true for docker-compose: there isn't some set of environment variables that docker-compose knows about that are used to start services but then aren't available in the container. To give the parallel example with Docker, their docker secrets are only available via swarm, which means there is a separate service handling encryption and giving them out. So I think for this simple example, if you want to start the service and then get rid of secrets, you would need to unset the variable and then remove the environment file.
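A host-side sketch of that use-once-then-scrub flow (app.sif, the svc instance name, and ./secrets-file are all hypothetical):

$ export SERVICE_PASSWORD="$(cat ./secrets-file)"   # secret visible to the startscript
$ singularity instance start app.sif svc            # startscript consumes the password
$ unset SERVICE_PASSWORD                            # scrub the host copy
$ rm ./secrets-file                                 # and the file it came from

Since shell/exec pass along the current host environment, interactions after the unset would no longer see the secret.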

On the Singularity side, there could be some custom feature where --cleanenv at instance start is honored for the rest of the interactions, but that still doesn't eliminate your need to unset a variable that is bound in and remove the file.

dtrudg (Contributor) commented Sep 23, 2020

@vsoch / @biocyberman - would you like to open a feature request for this, or shall I close the issue?

vsoch (Collaborator, Author) commented Sep 23, 2020

I understand the spirit of the issue (we want to start the container with some set of secrets that doesn't endure for later invocations), but I'm not sure how to do that in practice. If the envars were indeed used and then unset, and some file bound and then removed with some hypothetical --cleanenv-post flag, subsequent interactions wouldn't see them, but you would never be able to recover them to restart the container. And would the services that were started continue to work? Could a password-protected file be used, with the password required to read the envars at start and at restart? The issue is creeping into different methods of managing secrets, and I'm wondering whether this is in Singularity's ballpark or should be done with some third-party tool (does anyone have suggestions?). @biocyberman what are your thoughts?

dtrudg (Contributor) commented Mar 9, 2021

If there are further thoughts on this, please feel free to reopen. A docs issue is tracked at apptainer/singularity-userdocs#383.

dtrudg closed this as completed Mar 9, 2021