
podman run is not honoring --userns=keep-id --user=1000:1000 settings while creating volumes #16741

Open
queeup opened this issue Dec 5, 2022 · 11 comments
Labels: kind/bug

Comments

queeup commented Dec 5, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

With the command below I expect the created volume to be owned by UID 1000, but instead it is owned by UID 100000:

❯ podman run --rm --detach --name syncthing --restart=no --network=host --userns=keep-id --user=1000:1000 --volume syncthing_data-test:/var/syncthing docker.io/syncthing/syncthing

❯ ll ~/.local/share/containers/storage/volumes/
0700 drwx------@ - 100000  5 Dec 16:51 syncthing_data-test

❯ ls ~/.local/share/containers/storage/volumes/syncthing_data-test/
"/var/home/queeup/.local/share/containers/storage/volumes/syncthing_data-test/": Permission denied (os error 13)

❯ sudo ll ~/.local/share/containers/storage/volumes/syncthing_data-test/
0755 drwxr-xr-x@ - queeup  5 Dec 16:51 _data

Steps to reproduce the issue:

  1. Run the syncthing container with this command:
     podman run --rm --detach --name syncthing --restart=no --network=host --userns=keep-id --user=1000:1000 --volume syncthing_data-test:/var/syncthing docker.io/syncthing/syncthing
  2. Check the owner of the syncthing_data-test volume created by the podman run command.

Describe the results you received:
The podman run command creates the volume owned by UID 100000 when I use the --userns=keep-id --user=1000:1000 options.

Describe the results you expected:
I expected the volume to be created and owned by UID 1000 while using the --userns=keep-id --user=1000:1000 options.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Client:       Podman Engine
Version:      4.3.1
API Version:  4.3.1
Go Version:   go1.19.2
Built:        Fri Nov 11 18:01:27 2022
OS/Arch:      linux/amd64

Output of podman info:

host:
  arch: amd64
  buildahVersion: 1.28.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.5-1.fc37.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.5, commit: '
  cpuUtilization:
    idlePercent: 92.45
    systemPercent: 2.32
    userPercent: 5.23
  cpus: 8
  distribution:
    distribution: fedora
    variant: silverblue
    version: "37"
  eventLogger: journald
  hostname: fedora-t480
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.0.10-300.fc37.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 6961352704
  memTotal: 33409437696
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.7-1.fc37.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.7
      commit: 40d996ea8a827981895ce22886a9bac367f87264
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-8.fc37.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 41h 47m 11.00s (Approximately 1.71 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/queeup/.config/containers/storage.conf
  containerStore:
    number: 10
    paused: 0
    running: 9
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/queeup/.local/share/containers/storage
  graphRootAllocated: 998500204544
  graphRootUsed: 472483315712
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 32
  runRoot: /run/user/1000/containers
  volumePath: /var/home/queeup/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 1668178887
  BuiltTime: Fri Nov 11 18:01:27 2022
  GitCommit: ""
  GoVersion: go1.19.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):

❯ rpm -q podman
podman-4.3.1-1.fc37.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

No

Additional environment details (AWS, VirtualBox, physical, etc.):

openshift-ci bot added the kind/bug label Dec 5, 2022
mheon (Member) commented Dec 5, 2022

What is the UID of the user running podman - is it 1000? Is the intent here to have the volume owned by UID 1000 in the container, which is mapped to UID 1000 on the host? If so, --user 1000:1000 can probably be omitted; we do that by default when --userns=keep-id is passed.
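For reference, a minimal sketch of the simplified invocation being suggested here (assuming the user running podman is UID/GID 1000):

❯ podman run --rm --detach --name syncthing --restart=no --network=host --userns=keep-id --volume syncthing_data-test:/var/syncthing docker.io/syncthing/syncthing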

queeup (Author) commented Dec 5, 2022

Yes, that is what I was trying to achieve.

My UID (the user running podman) is 1000.

But I don't understand this:

With --userns=keep-id --user 1000:1000:
the created volume is owned by 100000, but the volume contents are owned by 1000.

Without --userns=keep-id, using only --user 1000:1000:
the created volume is owned by 1000, but the contents are owned by 100999.

If I create the volume manually with podman volume create, it is created as 1000, and then using --userns=keep-id --user 1000:1000 for the container gives me what I want. But I would like this to work without having to create the volume manually.
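For reference, a minimal sketch of the manual workaround described above:

❯ podman volume create syncthing_data-test
❯ podman run --rm --detach --name syncthing --restart=no --network=host --userns=keep-id --user=1000:1000 --volume syncthing_data-test:/var/syncthing docker.io/syncthing/syncthing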

rhatdan (Member) commented Dec 6, 2022

The 1000:1000 mapping to 100999 is definitely a bug in how we are calculating UID 1000 within the container.
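For reference, with the default rootless mapping shown in the podman info output above (container UID 0 -> host UID 1000, container UIDs 1-65536 -> host UIDs 100000-165535), container UID 1000 corresponds to host UID 100000 + (1000 - 1) = 100999, which is exactly the owner reported for the volume contents when --userns=keep-id is not used.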

queeup (Author) commented Dec 6, 2022

The same thing happens with pods:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: syncthing-pod
  name: syncthing-pod
spec:
  containers:
  - image: docker.io/syncthing/syncthing:latest
    name: syncthing
    hostUsers: false
    securityContext:
      runAsGroup: 1000
      runAsUser: 1000
    volumeMounts:
    - mountPath: /var/syncthing
      name: syncthing_data-pvc
  restartPolicy: Never
  volumes:
  - name: syncthing_data-pvc
    persistentVolumeClaim:
      claimName: syncthing_data-test
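Note: in the upstream Kubernetes pod spec, hostUsers is a pod-level field rather than a per-container one; a minimal sketch of that placement, assuming podman kube play follows the upstream field location:

spec:
  hostUsers: false
  containers:
  - name: syncthing
    image: docker.io/syncthing/syncthing:latest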
❯ podman kube play syncthing-pod-test.yaml
Pod:
f14bbab18068b0ca1c7267f8886e7f06a912594f93fad851ae09a410161b132f
Container:
eaf706dfa9f0f7b5a97e32f8c648af919b18b3c4504aef53fff58fac3b8765c6

❯ exa --long --octal-permissions --numeric /var/home/queeup/.local/share/containers/storage/volumes/
0700 drwx------@ - 1000  7 Dec 00:46 syncthing_data-test

❯ exa --long --octal-permissions --numeric /var/home/queeup/.local/share/containers/storage/volumes/syncthing_data-test/
0755 drwxr-xr-x@ - 100999  7 Dec 00:46 _data

With "--userns=keep-id":

❯ podman kube play --userns keep-id syncthing-pod-test.yaml
Pod:
51e61e09770df91283e529a44802cf3cbce0288f30203d2352244bd3963502b9
Container:
858cdd18f0972a8f743cd2e6866a47e4dc47512c0d9c21e31f8c36652ceeaf7f

❯ exa --long --octal-permissions --numeric /var/home/queeup/.local/share/containers/storage/volumes/
0700 drwx------@ - 100000  7 Dec 00:48 syncthing_data-test

❯ exa --long --octal-permissions --numeric /var/home/queeup/.local/share/containers/storage/volumes/syncthing_data-test/
"/var/home/queeup/.local/share/containers/storage/volumes/syncthing_data-test/": Permission denied (os error 13)

❯ sudo exa --long --octal-permissions --numeric /var/home/queeup/.local/share/containers/storage/volumes/syncthing_data-test
0755 drwxr-xr-x@ - 1000  7 Dec 00:48 _data

rhatdan added a commit to rhatdan/podman that referenced this issue Dec 7, 2022

When running containers with

podman run --userns=keep-id --user $UID:$GID -v test:/tmp/test ...

the volume directory ends up being owned by root within the user namespace, which is neither root of the host namespace nor the user's UID.

If we just allow podman to create these directories with the same UID that is running podman, i.e. don't chown them, they end up with the correct UID in all cases.

The actual volume will be chowned to the UID of the container.

Fixes: containers#16741

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
rhatdan added a commit to rhatdan/podman that referenced this issue Dec 9, 2022
rhatdan added a commit to rhatdan/podman that referenced this issue Dec 21, 2022
rhatdan added a commit to rhatdan/podman that referenced this issue Dec 22, 2022
rhatdan added a commit to rhatdan/podman that referenced this issue Dec 22, 2022
rhatdan added a commit to rhatdan/podman that referenced this issue Dec 22, 2022
rhatdan added a commit to rhatdan/podman that referenced this issue Dec 26, 2022
github-actions bot commented Jan 6, 2023

A friendly reminder that this issue had no activity for 30 days.

AceBlade258 commented Jan 10, 2023

I may also be experiencing an issue related to this; I am unable to start pods or containers with --uidmap or --gidmap.

Here is a debug log creating and starting a pod with no maps: https://pastebin.com/NgNs2tLk

Here is a debug log attempting to create and start a pod with a map (intentionally) overlapping an existing user: https://pastebin.com/eSDxz514

In my case, I want some of the programs running in the pod to run as the host system user:group 516000013:516000012.
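For illustration, a minimal sketch of an explicit rootful mapping that would make an in-container UID/GID line up with those host IDs (the range start 516000000 and the in-container IDs 13:12 are assumptions for the example, not taken from the report):

[aceblade258@fs01 ~]$ sudo podman run --rm --uidmap 0:516000000:65536 --gidmap 0:516000000:65536 --user 13:12 docker.io/library/alpine id

With that mapping, container UID 13 corresponds to host UID 516000000 + 13 = 516000013, and container GID 12 to host GID 516000012.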

Edit: Additional info if it is relevant:

[aceblade258@fs01 ~]$ sudo podman version
Client:       Podman Engine
Version:      4.3.1
API Version:  4.3.1
Go Version:   go1.18.7
Built:        Fri Nov 11 08:24:13 2022
OS/Arch:      linux/amd64
[aceblade258@fs01 ~]$ sudo podman info
host:
  arch: amd64
  buildahVersion: 1.28.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.5-1.fc36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.5, commit: '
  cpuUtilization:
    idlePercent: 99.25
    systemPercent: 0.39
    userPercent: 0.36
  cpus: 8
  distribution:
    distribution: fedora
    version: "36"
  eventLogger: journald
  hostname: fs01.core.kionade.com
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.0.15-200.fc36.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 133604745216
  memTotal: 135075651584
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.7.2-2.fc36.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.7.2
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
    version: |-
      slirp4netns version 1.2.0-beta.0
      commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 3h 51m 32.00s (Approximately 0.12 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 64956080128
  graphRootUsed: 2486108160
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 1668180253
  BuiltTime: Fri Nov 11 08:24:13 2022
  GitCommit: ""
  GoVersion: go1.18.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

[aceblade258@fs01 ~]$ sudo rpm -q podman
podman-4.3.1-1.fc36.x86_64

rhatdan added a commit to rhatdan/podman that referenced this issue Jan 11, 2023
github-actions bot commented
A friendly reminder that this issue had no activity for 30 days.

rhatdan (Member) commented Feb 19, 2023

@giuseppe PTAL

giuseppe (Member) commented
@rhatdan I'd expect your open PR to fix this issue.

The cause of the issue is that we chown the volume to the root user of the user namespace, which, in the case of --userns=keep-id, is the first additional ID assigned to the user. From the configuration above, I can see it is 100000.

    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
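For comparison, a rough sketch of the effective --userns=keep-id layout on this host (user UID 1000, subordinate range starting at 100000), which shows why the volume ends up owned by host UID 100000 when it is chowned to container root, while container UID 1000 still maps back to host UID 1000:

container UID 0        -> host UID 100000  (container root = first subordinate ID)
container UID 1-999    -> host UID 100001-100999
container UID 1000     -> host UID 1000    (the user running podman)
container UID 1001+    -> host UID 101000 and up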

rhatdan added a commit to rhatdan/podman that referenced this issue Mar 13, 2023
github-actions bot commented
A friendly reminder that this issue had no activity for 30 days.

rhatdan added a commit to rhatdan/podman that referenced this issue Mar 27, 2023
francoism90 commented
Any update on this? I'm having the same issue on Fedora Silverblue. It keeps using the wrong ID.


6 participants