
podman volume create seems to pass uid/gid option with the local driver to the mount command #10358

Open
ykuksenko opened this issue May 16, 2021 · 5 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

ykuksenko commented May 16, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug

Description
I would like to create a volume for a container that is owned by a specific user before it is mounted into a container.

When using the command podman volume create --opt "o=uid=XXX,gid=YYY" test_volume2 to pre-create the volume, the UID/GID of the volume gets set correctly. The issue is that the option also seems to be forwarded as a mount option to the underlying mount by the local driver, which does not support it.
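
As a workaround (a sketch, not an official interface), the volume can be created without mount options and its mountpoint chowned directly; since everything here runs as root, no podman unshare step is needed. $subuid/$subgid are the values computed in the reproduction steps below:

# podman volume create test_volume2
# chown "$subuid:$subgid" "$(podman volume inspect --format '{{ .Mountpoint }}' test_volume2)"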

The documentation seems to imply that with the local driver the uid/gid options will not be passed through to the underlying mount command, because they are not supported there (source):

When not using the local driver, the given options will be passed directly to the volume plugin. In this case, supported options will be dictated by the plugin in question, not Podman.
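
For contrast, uid and gid are valid mount options for filesystems that accept them at mount time, so a tmpfs-backed local volume does not hit this problem (the IDs here are illustrative):

# podman volume create --opt type=tmpfs --opt device=tmpfs --opt "o=uid=1000,gid=1000" test_tmpfs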

Steps to reproduce the issue:

  1. Setup:
# dnf install podman -y
# useradd test_user -m
  2. Create the volume:
# subuid=$(awk -F ':' '/test_user/ {print $2 + 100}' /etc/subuid)
# subgid=$(awk -F ':' '/test_user/ {print $2 + 100}' /etc/subgid)
# podman volume create --opt "o=uid=$subuid,gid=$subgid" test_volume2
  3. Use the volume:
# podman run --rm --subuidname=test_user --subgidname=test_user -v test_volume2:/test_volume2 alpine
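
Once the volume mounts, ownership can be verified from inside the container using BusyBox stat from the alpine image (an illustrative check, not part of the reproduction):

# podman run --rm --subuidname=test_user --subgidname=test_user -v test_volume2:/test_volume2 alpine stat -c '%u:%g' /test_volume2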

Describe the results you received:
Podman run output:

Error: error mounting volume test_volume2 for container 4354295c9d8e075eb7295d1a9583da4e03ccb2971b49b90ac57034186f98f915: error mounting volume test_volume2: mount: /var/lib/containers/storage/volumes/test_volume2/_data: wrong fs type, bad option, bad superblock on , missing codepage or helper program, or other error.
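
The same class of failure can be reproduced outside Podman, because uid/gid are not recognized mount options for most block filesystems such as ext4 (the device path here is hypothetical):

# mount -t ext4 -o uid=231172,gid=231172 /dev/vdb1 /mnt

mount(8) rejects this with a similar "wrong fs type, bad option" message.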

Podman volume inspect:

# podman volume inspect test_volume2
[
    {
        "Name": "test_volume2",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/test_volume2/_data",
        "CreatedAt": "2021-05-16T07:33:19.241345831Z",
        "Labels": {},
        "Scope": "local",
        "Options": {
            "GID": "231172",
            "UID": "231172",
            "o": "uid=231172,gid=231172"
        },
        "UID": 231172,
        "GID": 231172
    }
]

Describe the results you expected:
I expected no output.
I also expected the volume inspect output not to contain the line "o": "uid=231172,gid=231172".

Example: when the volume is created inline with podman run --rm --subuidname=test_user --subgidname=test_user -v test_volume:/test_volume alpine, the volume inspect output looks like this:

# podman volume inspect test_volume
[
    {
        "Name": "test",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/test/_data",
        "CreatedAt": "2021-05-16T06:53:14.568124297Z",
        "Labels": {},
        "Scope": "local",
        "Options": {},
        "UID": 231072,
        "GID": 231072
    }
]

Notably, the Options field is empty ({}).
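
For scripting, the presence of the forwarded option can be checked directly via a Go template, since podman volume inspect supports --format:

# podman volume inspect --format '{{ .Options }}' test_volume2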

Additional information you deem important (e.g. issue happens only occasionally):
All of this is run from the root account. The goal is to run user namespaced containers from the root account.
I included a 100-UID offset in the example, but this happens without it too. (I would ultimately like to keep the offset for actual use.)
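
For reference, the offset arithmetic: given an /etc/subuid entry like the one below (the 231072 start matches the UID in the second inspect output; the 65536 range size is the Fedora default and illustrative here), $2 + 100 in the awk expression yields 231172, which is the UID recorded in the first inspect output.

# grep test_user /etc/subuid
test_user:231072:65536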

dmesg contains messages like:

[  326.196858] fuseblk: Bad value for 'source'

Output of podman version:

Version:      3.1.2
API Version:  3.1.2
Go Version:   go1.16.3
Built:        Wed May 12 19:27:59 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.20.1
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.27-2.fc34.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.27, commit: '
  cpus: 2
  distribution:
    distribution: fedora
    version: "34"
  eventLogger: journald
  hostname: container
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.11.20-300.fc34.x86_64
  linkmode: dynamic
  memFree: 4945518592
  memTotal: 6223618048
  ociRuntime:
    name: crun
    package: crun-0.19.1-1.fc34.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.19.1
      commit: 1535fedf0b83fb898d449f9680000f729ba719f5
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: true
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 43m 44.5s
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /container_storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 1
  runRoot: /run/containers/storage
  volumePath: /container_storage/volumes
version:
  APIVersion: 3.1.2
  Built: 1620847679
  BuiltTime: Wed May 12 19:27:59 2021
  GitCommit: ""
  GoVersion: go1.16.3
  OsArch: linux/amd64
  Version: 3.1.2

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.1.2-3.fc34.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

  • I checked with podman 3.1.2 and the troubleshooting guide. I did not try the master branch.

Additional environment details (AWS, VirtualBox, physical, etc.):
Using libvirt based vagrant box:
fedora/34-cloud-base (libvirt, 34.20210423.0)

openshift-ci bot added the kind/bug label May 16, 2021
rhatdan commented May 17, 2021

@mheon PTAL

@github-actions

A friendly reminder that this issue had no activity for 30 days.

Luap99 commented Jun 25, 2021

This should be fixed in the main branch

Luap99 closed this as completed Jun 25, 2021
ykuksenko (Author) commented:

This is still broken in version 4.3.1; the output is slightly more verbose but basically the same.
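
A minimal retest, with illustrative IDs (no user-namespace flags are needed, since the failure happens at volume mount time):

# podman volume create --opt "o=uid=1000,gid=1000" vol_test
# podman run --rm -v vol_test:/v alpine true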

rhatdan commented Jan 3, 2023

@mheon PTAL
