
NFS volumes created using podman volume create unavailable in container #4303

Closed
toddhpoole opened this issue Oct 19, 2019 · 9 comments · Fixed by #4305
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@toddhpoole

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
After filing bug reports #4249, #4248, and #4247, and seeing all 3 resolved by the release of 1.6.2, we resumed trying to use podman volume create to create and mount NFS-backed volumes as originally announced in 1.6.1.

After re-creating an NFS-backed volume using 1.6.2, we're still unable to see the contents of that volume from within our containers.

Documentation covering NFS volumes is non-existent, so we cannot tell if this is user error or broken code. As a part of resolving this issue, please consider adding NFS test cases to your build pipeline and expanding the Examples section of podman-volume-create.1.md and/or podman-run.1.md with more examples, including NFS ones.

Steps to reproduce the issue:

  1. Create a volume backed by an NFS filesystem (guessing at the invocation here... again, there are no NFS examples in the documentation to reference).
[root@testhost ~]# podman volume create --opt type=nfs --opt o=addr=192.168.2.126,rw --opt device=:/exports/test/ podman_vol_create_test1
podman_vol_create_test1
  2. Confirm that podman is aware of the volume.
[root@testhost ~]# podman volume ls
DRIVER   VOLUME NAME
local    podman_vol_create_test1
[root@testhost ~]# podman volume inspect podman_vol_create_test1
[
     {
          "Name": "podman_vol_create_test1",
          "Driver": "local",
          "Mountpoint": "/var/lib/containers/storage/volumes/podman_vol_create_test1/_data",
          "CreatedAt": "2019-10-19T14:58:37.374975029-07:00",
          "Labels": {
               
          },
          "Scope": "local",
          "Options": {
               
          }
     }
]
  3. Run a container with the volume attached:
[root@testhost ~]# podman --log-level=debug run --rm --interactive --tty --volume podman_vol_create_test1:/test_mount_location_inside_container1 test_container_root
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/lib/containers/storage 
DEBU[0000] Using run root /var/run/containers/storage   
DEBU[0000] Using static dir /var/lib/containers/storage/libpod 
DEBU[0000] Using tmp dir /var/run/libpod                
DEBU[0000] Using volume path /var/lib/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] cached value indicated that metacopy is being used 
DEBU[0000] cached value indicated that native-diff is not being used 
WARN[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument 
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/library/test_container_root:latest" 
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/library/test_container_root:latest" does not resolve to an image ID 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]localhost/test_container_root:latest" 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@3e255182cea3d831b1ef2b3eeae8ec326bd8346d3f36aa6395d3520988ac8e40" 
DEBU[0000] exporting opaque data as blob "sha256:3e255182cea3d831b1ef2b3eeae8ec326bd8346d3f36aa6395d3520988ac8e40" 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@3e255182cea3d831b1ef2b3eeae8ec326bd8346d3f36aa6395d3520988ac8e40" 
DEBU[0000] exporting opaque data as blob "sha256:3e255182cea3d831b1ef2b3eeae8ec326bd8346d3f36aa6395d3520988ac8e40" 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@3e255182cea3d831b1ef2b3eeae8ec326bd8346d3f36aa6395d3520988ac8e40" 
DEBU[0000] User mount podman_vol_create_test1:/test_mount_location_inside_container1 options [] 
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Using bridge netmode                         
DEBU[0000] created OCI spec and options for new container 
DEBU[0000] Allocated lock 7 for container c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@3e255182cea3d831b1ef2b3eeae8ec326bd8346d3f36aa6395d3520988ac8e40" 
DEBU[0000] exporting opaque data as blob "sha256:3e255182cea3d831b1ef2b3eeae8ec326bd8346d3f36aa6395d3520988ac8e40" 
DEBU[0000] created container "c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c" 
DEBU[0000] container "c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c" has work directory "/var/lib/containers/storage/overlay-containers/c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c/userdata" 
DEBU[0000] container "c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c" has run directory "/var/run/containers/storage/overlay-containers/c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c/userdata" 
DEBU[0000] New container created "c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c" 
DEBU[0000] container "c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c" has CgroupParent "machine.slice/libpod-c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c.scope" 
DEBU[0000] Handling terminal attach                     
DEBU[0000] Made network namespace at /var/run/netns/cni-ea22c82a-f427-8a05-2cee-6bc8333f7221 for container c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c 
INFO[0000] Got pod network &{Name:vibrant_nobel Namespace:vibrant_nobel ID:c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c NetNS:/var/run/netns/cni-ea22c82a-f427-8a05-2cee-6bc8333f7221 Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]} 
INFO[0000] About to add CNI network cni-loopback (type=loopback) 
DEBU[0000] overlay: mount_data=nodev,metacopy=on,lowerdir=/var/lib/containers/storage/overlay/l/MHZP6W4WAYMWM3MU5JRSYW6IY6:/var/lib/containers/storage/overlay/l/BKDAH5YRYYD447ETT6U327MU3I,upperdir=/var/lib/containers/storage/overlay/d3adf5aa43f688a3e9eb99ec1007688b67f0e1166ad95912066c4f205f7daa88/diff,workdir=/var/lib/containers/storage/overlay/d3adf5aa43f688a3e9eb99ec1007688b67f0e1166ad95912066c4f205f7daa88/work,context="system_u:object_r:container_file_t:s0:c328,c801" 
DEBU[0000] mounted container "c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c" at "/var/lib/containers/storage/overlay/d3adf5aa43f688a3e9eb99ec1007688b67f0e1166ad95912066c4f205f7daa88/merged" 
DEBU[0000] Mounted volume podman_vol_create_test1       
DEBU[0000] Volume podman_vol_create_test1 mount count now at 1 
DEBU[0000] Copying up contents from container c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c to volume podman_vol_create_test1 
INFO[0000] Got pod network &{Name:vibrant_nobel Namespace:vibrant_nobel ID:c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c NetNS:/var/run/netns/cni-ea22c82a-f427-8a05-2cee-6bc8333f7221 Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]} 
INFO[0000] About to add CNI network podman (type=bridge) 
DEBU[0000] Created root filesystem for container c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c at /var/lib/containers/storage/overlay/d3adf5aa43f688a3e9eb99ec1007688b67f0e1166ad95912066c4f205f7daa88/merged 
DEBU[0000] [0] CNI result: Interfaces:[{Name:cni-podman0 Mac:22:f4:f7:b5:15:e9 Sandbox:} {Name:veth3b4ca5a2 Mac:72:18:0b:4c:6f:94 Sandbox:} {Name:eth0 Mac:ee:00:26:cd:4e:17 Sandbox:/var/run/netns/cni-ea22c82a-f427-8a05-2cee-6bc8333f7221}], IP:[{Version:4 Interface:0xc000644d28 Address:{IP:10.88.0.7 Mask:ffff0000} Gateway:10.88.0.1}], Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}], DNS:{Nameservers:[] Domain: Search:[] Options:[]} 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Setting CGroups for container c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c to machine.slice:libpod:c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] reading hooks from /etc/containers/oci/hooks.d 
DEBU[0000] Created OCI spec for container c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c at /var/lib/containers/storage/overlay-containers/c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -s -c c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c -u c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c/userdata -p /var/run/containers/storage/overlay-containers/c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c/userdata/pidfile -l k8s-file:/var/lib/containers/storage/overlay-containers/c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -t --conmon-pidfile /var/run/containers/storage/overlay-containers/c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c]"
INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c.scope 
DEBU[0000] Received: 1017                               
INFO[0000] Got Conmon PID as 1003                       
DEBU[0000] Created container c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c in OCI runtime 
DEBU[0000] Attaching to container c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c 
DEBU[0000] connecting to socket /var/run/libpod/socket/c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c/attach 
DEBU[0000] Received a resize event: {Width:318 Height:98} 
DEBU[0000] Starting container c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c with command [/bin/sh -c bash /bin/bash] 
DEBU[0000] Started container c1acfcac7d55dfd3e47ab8b15d3167253ee4999984f9016c9ee6b0330989119c 
DEBU[0000] Enabling signal proxying 
  4. Observe that while the mount location is created, there's nothing inside:
[root@c1acfcac7d55 /]# ls -al /test_mount_location_inside_container1/
total 0
drwxr-xr-x. 2 root root  6 Oct 19 14:58 .
drwxr-xr-x. 1 root root 51 Oct 19 15:40 ..
[root@c1acfcac7d55 /]# 

Describe the results you received:
The NFS share is not mounted nor accessible from within the container.

Describe the results you expected:
The NFS share is mounted and accessible from within the container.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

[root@testhost ~]# podman version
Version:            1.6.2
RemoteAPI Version:  1
Go Version:         go1.12.10
OS/Arch:            linux/amd64

Output of podman info --debug:

[root@testhost ~]# podman info --debug
debug:
  compiler: gc
  git commit: ""
  go version: go1.12.10
  podman version: 1.6.2
host:
  BuildahVersion: 1.11.3
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.1-1.fc30.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.1, commit: 4346fbe0b2634b05857973bdf663598081240374'
  Distribution:
    distribution: fedora
    version: "30"
  MemFree: 29456236544
  MemTotal: 33539690496
  OCIRuntime:
    name: runc
    package: runc-1.0.0-93.dev.gitb9b6cc6.fc30.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: e3b4c1108f7d1bf0d09ab612ea09927d9b59b4e3
      spec: 1.0.1-dev
  SwapFree: 16840126464
  SwapTotal: 16840126464
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: testhost
  kernel: 5.2.18-200.fc30.x86_64
  os: linux
  rootless: false
  uptime: 168h 7m 19.81s (Approximately 7.00 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: overlay
  GraphOptions:
    overlay.mountopt: nodev,metacopy=on
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  ImageStore:
    number: 2
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

[root@testhost ~]# rpm -q podman
podman-1.6.2-2.fc30.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):
Fresh minimal install of Fedora 30 with:
yum -y install vim nfs-utils buildah
yum -y distro-sync --enablerepo=updates-testing podman

Exports are visible to host:

[root@testhost ~]# showmount -e 192.168.2.126
Export list for 192.168.2.126:
/exports/test  192.168.2.0/24

Exports can be mounted outside of podman using mount 192.168.2.126:/exports/test /mnt/test so it does not appear to be an issue with the NFS server.

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Oct 19, 2019
@mheon
Member

mheon commented Oct 19, 2019

This is not NFS specific. The mount command is failing silently under us. I can replicate with podman volume create --opt type=tmpfs --opt device=tmpfs --opt o=invalid testvol and then mounting testvol into a container.

@mheon
Member

mheon commented Oct 19, 2019

I strongly suspect that your NFS invocation is incorrect, and that the error is somehow being lost. I will note that Podman invokes the mount binary using the arguments you provide - mount -t <type> -o <o> <device> <volume path> (where type comes from --opt type=..., o from --opt o=..., and device from --opt device=...). If you can assemble a working mount command that mounts the filesystem, and then disassemble it to fit those parameters, it should work.

I'll fix the bug here, but your NFS invocation likely still won't work as is.
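The mapping described above can be sketched in Go (illustrative only, not podman's actual source; the server name, export path, and function name are hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// buildMountCmd sketches how the local volume driver assembles the
// mount(8) invocation from the volume's options, per the comment
// above: --opt type=... becomes -t, --opt o=... becomes -o, and
// --opt device=... becomes the device argument.
func buildMountCmd(opts map[string]string, mountpoint string) string {
	args := []string{"mount"}
	if t, ok := opts["type"]; ok {
		args = append(args, "-t", t)
	}
	if o, ok := opts["o"]; ok {
		args = append(args, "-o", o)
	}
	args = append(args, opts["device"], mountpoint)
	return strings.Join(args, " ")
}

func main() {
	// A working "mount -t nfs4 -o rw nfsserver0:/path/to/exported/share ..."
	// command, disassembled into the three --opt values:
	opts := map[string]string{
		"type":   "nfs4",
		"o":      "rw",
		"device": "nfsserver0:/path/to/exported/share",
	}
	fmt.Println(buildMountCmd(opts, "/var/lib/containers/storage/volumes/vol/_data"))
}
```

In other words, a volume created with those three --opt flags should mount exactly as the hand-built mount command did.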

@mheon
Member

mheon commented Oct 20, 2019

Found it. Stupid error on my part. Patch in a few.

mheon added a commit to mheon/libpod that referenced this issue Oct 20, 2019
command.Start() just starts the command. That catches some
errors, but the nasty ones - bad options and similar - happen
when the command runs. Use CombinedOutput() instead - it waits
for the command to exit, and thus catches non-0 exit of the
`mount` command (invalid options, for example).

STDERR from the `mount` command is directly used, which isn't
necessarily the best, but we can't really get much more info on
what went wrong.

Fixes containers#4303

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
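The difference the commit message describes can be sketched with os/exec (a stand-in shell command plays the part of a mount binary that launches fine but exits non-zero; this is a sketch of the pattern, not the actual libpod patch):

```go
package main

import (
	"fmt"
	"os/exec"
)

// mountVolume shows the fixed pattern: CombinedOutput waits for the
// command to exit and captures stdout+stderr together, so a non-zero
// exit (e.g. invalid mount options) surfaces as an error carrying the
// command's own message. cmd.Start() alone would have returned nil,
// because the binary itself launches successfully.
func mountVolume(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("mounting volume: %s: %w", string(out), err)
	}
	return nil
}

func main() {
	// Stand-in for mount(8) failing on a bad -o value: it writes a
	// diagnostic to stderr and exits 32 (mount's "mount failure" code).
	err := mountVolume("sh", "-c", "echo 'an incorrect mount option was specified' >&2; exit 32")
	fmt.Println(err)
}
```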
@mheon
Member

mheon commented Oct 20, 2019

#4305 to fix

@toddhpoole
Author

Thank you @mheon for describing how podman volume create parses its arguments and uses them to invoke mount. Will you add your comment to podman-volume-create.1.md so that it will be easier for others to find?

To those in the future who might find this bug report via a Google search result, here's how to create an NFS volume using podman volume create and then attach that volume to a pre-existing container.

Note: This uses podman 1.6.2, and is run as root.

1. Determine the name of the NFS export you'd like to mount from your container host:

[root@containerhost0 /]# showmount -e nfsserver0
Export list for nfsserver0:
/path/to/exported/share  192.168.2.0/24

2. Create an NFS-backed volume using podman volume create:

[root@containerhost0 /]# podman volume create --opt type=nfs4 --opt o=rw --opt device=nfsserver0:/path/to/exported/share/ name_of_podman_volume0
name_of_podman_volume0

3. Confirm that podman is aware of the volume:

[root@containerhost0 /]# podman volume ls
DRIVER   VOLUME NAME
local    name_of_podman_volume0
[root@containerhost0 /]# podman volume inspect name_of_podman_volume0
[
     {
          "Name": "name_of_podman_volume0",
          "Driver": "local",
          "Mountpoint": "/var/lib/containers/storage/volumes/name_of_podman_volume0/_data",
          "CreatedAt": "2019-10-19T17:34:20.212638303-07:00",
          "Labels": {
               
          },
          "Scope": "local",
          "Options": {
               
          }
     }
]

4. Run a container with the volume attached:

[root@containerhost0 /]# podman run --rm --interactive --tty --volume name_of_podman_volume0:/mount_location_inside_container name_of_preexisting_container

5. Confirm you can see and write to your NFS share via the volume you just created:

[root@ca95c8a0f09f /]# ls -al /mount_location_inside_container/
total 20
drwxr-xr-x.  6 1000 1000   3 Oct 19 17:16  .
drwxr-xr-x.  1 root root  45 Oct 19 17:39  ..
-rw-rw-rw-.  1 1000 1000   0 Oct 19 17:16  test.txt
[root@ca95c8a0f09f /]# touch /mount_location_inside_container/test2.txt
[root@ca95c8a0f09f /]# ls -al /mount_location_inside_container/
total 21
drwxr-xr-x.  6 1000 1000   3 Oct 19 17:16  .
drwxr-xr-x.  1 root root  45 Oct 19 17:39  ..
-rw-rw-rw-.  1 1000 1000   0 Oct 19 17:16  test.txt
-rw-rw-rw-.  1 1000 1000   0 Oct 19 17:16  test2.txt

@rhatdan
Member

rhatdan commented Oct 20, 2019

Looks like this would be a nice addition to the man page. @toddhpoole Would you like to open a PR to the man page to add your example?

mheon added a commit to mheon/libpod that referenced this issue Oct 23, 2019
mheon added a commit to mheon/libpod that referenced this issue Oct 23, 2019
@mheon
Member

mheon commented Oct 23, 2019

I'll take the docs changes

mheon added a commit to mheon/libpod that referenced this issue Oct 28, 2019
mheon added a commit to mheon/libpod that referenced this issue Oct 30, 2019
rh-container-bot pushed a commit to lsm5/podman that referenced this issue Nov 17, 2019
@e-minguez
Contributor

Are the docs updated? I've been looking through the official docs http://docs.podman.io/en/latest/search.html?q=nfs&check_keywords=yes&area=default and the man pages, and there is no mention of NFS.

@computator

@mheon those docs changes seem to have never made it in :)

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 20, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023