
Contents of NFS volumes created by podman volume create not visible from within containers #4248

Closed
toddhpoole opened this issue Oct 13, 2019 · 5 comments · Fixed by #4256
Labels
kind/bug, locked - please file new issue/PR

Comments

toddhpoole commented Oct 13, 2019

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
Based on our reading of the 1.6.0 changelog, podman volume create can now "create and mount volumes with options, allowing volumes backed by NFS." When we try to exercise this feature and create an NFS-backed volume, we're then unable to see the contents of that volume from within our containers.

Documentation covering NFS volumes is non-existent, so if this is user error, please advise. Either way, expanding the Examples section of podman-volume-create.1.md and podman-run.1.md with more examples, including several NFS ones, would be helpful.

Steps to reproduce the issue:

  1. Create a volume backed by an NFS filesystem (guessing at the invocation here... again, there are no NFS examples in the documentation to reference).
$ podman volume create --opt type=nfs --opt o=addr=192.168.2.126,rw --opt device=:/exports/test test_nfs_vol
test_nfs_vol
  2. Confirm that podman is aware of the volume.
$ podman volume inspect --all
[
     {
          "Name": "test_nfs_vol",
          "Driver": "local",
          "Mountpoint": "/home/testuser/.local/share/containers/storage/volumes/test_nfs_vol/_data",
          "CreatedAt": "2019-10-12T20:25:25.000893895-07:00",
          "Labels": {
               
          },
          "Scope": "local",
          "Options": {
               
          }
     }
]
  3. Try to run a container with the volume attached:
$ podman run --rm --interactive --tty --volume test_nfs_vol:/mnt/test test_container
  4. Observe that the target directory inside the container is empty (a host-side check of the volume's backing directory is sketched after these steps):
[root@0452086601a2 /]# ls -al /mnt/test
total 0
drwxr-xr-x. 2 root root  6 Oct 12 20:25 .
drwxr-xr-x. 3 root root 18 Oct 12 20:27 ..
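
A host-side sanity check along these lines would show whether the export was ever mounted into the volume's backing directory (illustrative commands; the Mountpoint path is the one reported by podman volume inspect in step 2):

# if the NFS export was never mounted, this is just an empty local directory
$ ls -al /home/testuser/.local/share/containers/storage/volumes/test_nfs_vol/_data
$ findmnt --target /home/testuser/.local/share/containers/storage/volumes/test_nfs_vol/_data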

Describe the results you received:
An empty target directory inside the container.

Describe the results you expected:
The volume's contents to be visible in the target directory inside the container.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

$ podman version
Version:            1.6.1
RemoteAPI Version:  1
Go Version:         go1.12.9
OS/Arch:            linux/amd64

Output of podman info --debug:

$ podman info --debug
debug:
  compiler: gc
  git commit: ""
  go version: go1.12.9
  podman version: 1.6.1
host:
  BuildahVersion: 1.11.2
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.1-1.fc30.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.1, commit: 4346fbe0b2634b05857973bdf663598081240374'
  Distribution:
    distribution: fedora
    version: "30"
  MemFree: 31369199616
  MemTotal: 33539690496
  OCIRuntime:
    package: runc-1.0.0-93.dev.gitb9b6cc6.fc30.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: e3b4c1108f7d1bf0d09ab612ea09927d9b59b4e3
      spec: 1.0.1-dev
  SwapFree: 16840126464
  SwapTotal: 16840126464
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: host0
  kernel: 5.2.18-200.fc30.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.0-4.git19d199a.fc30.x86_64
    Version: |-
      slirp4netns version 0.4.0-beta.2
      commit: 19d199a6ca424fcf9516320a327cedad85cf4dfb
  uptime: 3h 28m 18.42s (Approximately 0.12 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/testuser/.config/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.6.3-2.0.dev.git46c0f8e.fc30.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.6.3
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  GraphRoot: /home/testuser/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 2
  RunRoot: /run/user/1000
  VolumePath: /home/testuser/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

$ rpm -q podman
podman-1.6.1-2.fc30.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):
Fresh minimal install of Fedora 30 with yum -y install vim nfs-utils podman buildah.

Exports are visible to the host:

$ showmount -e 192.168.2.126
Export list for 192.168.2.126:
/exports/test  192.168.2.0/24

Exports can be mounted outside of podman using mount 192.168.2.126:/exports/test /mnt/test.
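
Expanded, that manual check looks roughly like this (a sketch using default NFS mount options, run as root on the host):

$ sudo mkdir -p /mnt/test
$ sudo mount -t nfs 192.168.2.126:/exports/test /mnt/test
$ ls -al /mnt/test    # contents of the export are visible here
$ sudo umount /mnt/test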

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Oct 13, 2019
@toddhpoole toddhpoole changed the title Contents of NFS volumes created by podman create volume not visible from within containers Contents of NFS volumes created by podman volume create not visible from within containers Oct 13, 2019

mheon commented Oct 13, 2019 via email

@toddhpoole (Author)

No errors when starting the container. We're dropped right into our entrypoint as if everything worked. Debug log from container startup:

$ podman --log-level=debug run --rm --interactive --tty --volume test_nfs_vol:/mnt/test test_container
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/testuser/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/testuser/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /home/testuser/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/testuser/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument 
INFO[0000] running as rootless                          
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/testuser/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/testuser/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /home/testuser/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/testuser/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Initializing event backend journald          
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument 
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] parsed reference into "[overlay@/home/testuser/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/library/test_container:latest" 
DEBU[0000] reference "[overlay@/home/testuser/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/library/test_container:latest" does not resolve to an image ID 
DEBU[0000] parsed reference into "[overlay@/home/testuser/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]localhost/test_container:latest" 
DEBU[0000] parsed reference into "[overlay@/home/testuser/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]@200b4408d9bd3d2b4cfdf2645ecbeabd4cb6dff09b48c7801a9058b6e7e9c6c6" 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] exporting opaque data as blob "sha256:200b4408d9bd3d2b4cfdf2645ecbeabd4cb6dff09b48c7801a9058b6e7e9c6c6" 
DEBU[0000] parsed reference into "[overlay@/home/testuser/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]@200b4408d9bd3d2b4cfdf2645ecbeabd4cb6dff09b48c7801a9058b6e7e9c6c6" 
DEBU[0000] exporting opaque data as blob "sha256:200b4408d9bd3d2b4cfdf2645ecbeabd4cb6dff09b48c7801a9058b6e7e9c6c6" 
DEBU[0000] parsed reference into "[overlay@/home/testuser/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]@200b4408d9bd3d2b4cfdf2645ecbeabd4cb6dff09b48c7801a9058b6e7e9c6c6" 
DEBU[0000] User mount test_nfs_vol:/mnt/test options [] 
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Using slirp4netns netmode                    
DEBU[0000] created OCI spec and options for new container 
DEBU[0000] Allocated lock 7 for container 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb 
DEBU[0000] parsed reference into "[overlay@/home/testuser/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]@200b4408d9bd3d2b4cfdf2645ecbeabd4cb6dff09b48c7801a9058b6e7e9c6c6" 
DEBU[0000] exporting opaque data as blob "sha256:200b4408d9bd3d2b4cfdf2645ecbeabd4cb6dff09b48c7801a9058b6e7e9c6c6" 
DEBU[0000] created container "66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb" 
DEBU[0000] container "66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb" has work directory "/home/testuser/.local/share/containers/storage/overlay-containers/66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb/userdata" 
DEBU[0000] container "66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb" has run directory "/run/user/1000/overlay-containers/66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb/userdata" 
DEBU[0000] New container created "66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb" 
DEBU[0000] container "66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb" has CgroupParent "/libpod_parent/libpod-66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb" 
DEBU[0000] Handling terminal attach                     
DEBU[0000] Made network namespace at /run/user/1000/netns/cni-425c0782-966b-5be9-c8fa-d5da3c771e62 for container 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb 
DEBU[0000] overlay: mount_data=lowerdir=/home/testuser/.local/share/containers/storage/overlay/l/ACZ7QWSU7AV4UGL45SVVVE7YLO:/home/testuser/.local/share/containers/storage/overlay/l/FT5FKRIJE7SBANJGUDJWMXTUZY,upperdir=/home/testuser/.local/share/containers/storage/overlay/e1b84444d269b453129b344e1699a5d01ec48874b10d9ac4c0dc019ceaacc606/diff,workdir=/home/testuser/.local/share/containers/storage/overlay/e1b84444d269b453129b344e1699a5d01ec48874b10d9ac4c0dc019ceaacc606/work,context="system_u:object_r:container_file_t:s0:c211,c443" 
DEBU[0000] mounted container "66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb" at "/home/testuser/.local/share/containers/storage/overlay/e1b84444d269b453129b344e1699a5d01ec48874b10d9ac4c0dc019ceaacc606/merged" 
DEBU[0000] Volume test_nfs_vol mount count now at 2     
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-425c0782-966b-5be9-c8fa-d5da3c771e62 tap0 
DEBU[0000] Created root filesystem for container 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb at /home/testuser/.local/share/containers/storage/overlay/e1b84444d269b453129b344e1699a5d01ec48874b10d9ac4c0dc019ceaacc606/merged 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Created OCI spec for container 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb at /home/testuser/.local/share/containers/storage/overlay-containers/66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb -u 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb -r /usr/bin/runc -b /home/testuser/.local/share/containers/storage/overlay-containers/66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb/userdata -p /run/user/1000/overlay-containers/66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb/userdata/pidfile -l k8s-file:/home/testuser/.local/share/containers/storage/overlay-containers/66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog -t --conmon-pidfile /run/user/1000/overlay-containers/66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/testuser/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb]"
DEBU[0000] Received: 2165                               
INFO[0000] Got Conmon PID as 2154                       
DEBU[0000] Created container 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb in OCI runtime 
DEBU[0000] Attaching to container 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb 
DEBU[0000] connecting to socket /run/user/1000/libpod/tmp/socket/66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb/attach 
DEBU[0000] Starting container 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb with command [/bin/sh -c bash /bin/bash] 
DEBU[0000] Received a resize event: {Width:157 Height:98} 
DEBU[0000] Started container 66c79af68ec8dc4de4c96267624e694065a0fa55aa1fb92d053020f8e90621fb 
DEBU[0000] Enabling signal proxying
[root@66c79af68ec8 /]#

Further, the NFS mount does not appear in the output of mount on the host, nor inside the container.
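
For anyone reproducing this, the absence of the mount can be confirmed with something like the following (illustrative commands):

# on the host, as the rootless user
$ mount | grep -i nfs      # no output: no NFS mount present
$ findmnt -t nfs,nfs4

# inside the container
[root@66c79af68ec8 /]# grep nfs /proc/mounts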

mheon commented Oct 13, 2019 via email

rhatdan commented Oct 14, 2019

Yes, we should block any attempt to use volumes that require a mount in rootless mode.

Rootless users are only allowed to mount fuse, bind, sysfs, and procfs filesystems. All other filesystems require the SYS_ADMIN capability, i.e., root.
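
For comparison, the same flow run as root (where mount(2) is permitted) would be expected to work; this is a sketch built from the commands in the original report, not something verified here:

$ sudo podman volume create --opt type=nfs --opt o=addr=192.168.2.126,rw --opt device=:/exports/test test_nfs_vol
$ sudo podman run --rm --interactive --tty --volume test_nfs_vol:/mnt/test test_container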

mheon commented Oct 14, 2019

I'm really confused as to how this is running without error. Invoking mount without root consistently exits with a non-zero code in my testing, so we shouldn't be reporting a successful mount.
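
That expectation is easy to check by hand outside of podman (illustrative; exact error text and exit code depend on the util-linux version):

# as the rootless user, roughly the mount libpod would attempt
$ mkdir -p /tmp/nfs_probe
$ mount -t nfs 192.168.2.126:/exports/test /tmp/nfs_probe
$ echo $?    # expected: non-zero, since this mount(2) call needs CAP_SYS_ADMIN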

mheon added a commit to mheon/libpod that referenced this issue Oct 14, 2019
Also, ensure that we don't try to mount them without root - it
appears that the mount can somehow fail to error and instead
report success when it clearly did not succeed, which can induce
this case.

We reuse the `--force` flag to indicate that a volume should be
removed even after unmount errors. It seems fairly natural to
expect that --force will remove a volume that is otherwise
presenting problems.

Finally, ignore EINVAL on unmount - if the mount point no longer
exists our job is done.

Fixes: containers#4247
Fixes: containers#4248

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 23, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2023