Mounted sif containers not unmounted after running. #28

Closed
ast0815 opened this issue May 11, 2021 · 3 comments

ast0815 commented May 11, 2021

Version of Singularity

$ singularity --version
singularity version 3.7.3-1.el7

Describe the bug

Singularity does not unmount the SIF container images after the container finishes running. After a while, all loop devices are "used up", and I have to unmount them manually before I can run again.
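For reference, manual cleanup currently looks roughly like this (a sketch only; the mount points and loop device names are illustrative and differ between runs):

$ mount | grep squashfs             # find the stale squashfs mounts left behind
$ umount /run/media/root/disk       # unmount each stale mount point
$ losetup -a                        # check whether any loop devices are still attached
$ losetup -d /dev/loop0             # detach a leftover loop device, if any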

To Reproduce

$ mount
...
tmpfs on /run/user/1003 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=796564k,mode=700,uid=1003,gid=1003)
$ singularity run shub://GodloveD/lolcow
INFO:    Use cached image
 ___________________
< Are you a turtle? >
 -------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
$ mount
...
tmpfs on /run/user/1003 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=796564k,mode=700,uid=1003,gid=1003)
/root/.singularity/cache/shub/a59d8de3121579fe9c95ab8af0297c2e3aefd827 on /run/media/root/disk type squashfs (ro,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
$ singularity run shub://GodloveD/lolcow
INFO:    Use cached image
 ________________________________________
/ You will be advanced socially, without \
\ any special effort on your part.       /
 ----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
$ mount
...
tmpfs on /run/user/1003 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=796564k,mode=700,uid=1003,gid=1003)
/root/.singularity/cache/shub/a59d8de3121579fe9c95ab8af0297c2e3aefd827 on /run/media/root/disk type squashfs (ro,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
/root/.singularity/cache/shub/a59d8de3121579fe9c95ab8af0297c2e3aefd827 on /run/media/root/disk1 type squashfs (ro,nosuid,nodev,relatime,seclabel,uhelper=udisks2)

Expected behavior

The mounted loop devices should be unmounted after the container exits.

OS / Linux Distribution

$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="http://cern.ch/linux/"
BUG_REPORT_URL="http://cern.ch/linux/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Installation Method

Installed via yum.

Additional context

Running sandboxes does not seem to cause any problems; only the SIF container files do.
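A minimal way to double-check this, assuming singularity build is available to pull the same image into a directory sandbox (the sandbox path here is illustrative):

$ singularity build --sandbox lolcow_sandbox/ shub://GodloveD/lolcow
$ singularity run lolcow_sandbox/
$ mount | grep /run/media           # compare against the output after a SIF run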

ast0815 added the bug label May 11, 2021
dtrudg added the needs investigation label and removed the bug label May 11, 2021
dtrudg (Member) commented May 11, 2021

Hi @ast0815,

I've dropped the bug label here as singularity does not mount anything under /run/media/xxxx. That location is usually managed by a Linux Desktop Environment or associated process, which auto-mounts CDs, inserted USB sticks etc.

Is this a CERN specific customization of CentOS 7? Which desktop environment (if any) is used?
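
The uhelper=udisks2 option in the mount output above suggests udisks2 performed those mounts. A quick way to check what is driving the auto-mounting, assuming systemd and the udisks2 tools are installed:

$ systemctl status udisks2                  # is the udisks2 daemon active?
$ ps ax | grep gvfs-udisks2-volume-monitor  # GNOME's volume monitor, which asks udisks2 to auto-mount new block devices
$ udisksctl status                          # block devices udisks2 currently knows about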

ast0815 (Author) commented May 11, 2021

I think the machine has GNOME installed (interpreting the output of htop), but I only work on it remotely via SSH.
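
A couple of ways to confirm this over SSH, assuming systemd-logind is in use (these only show whether a graphical session and GNOME processes are present):

$ loginctl list-sessions        # lists active sessions, including any graphical ones
$ ps ax | grep -i gnome-shell   # a running gnome-shell process indicates an active GNOME session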

dtrudg (Member) commented Jun 14, 2024

I have done development work on various machines running different versions of GNOME and KDE, and haven't seen this issue.

Singularity does not mount anything under /run/media/xxx, so the issue must be due to an external process. In the absence of a way to reproduce, I'll close this as there is nothing we can do here.

dtrudg closed this as not planned on Jun 14, 2024