
Workdir rejected when it's inside volume #11352

Closed
rhn opened this issue Aug 29, 2021 · 7 comments · Fixed by #11353
Labels
kind/bug Categorizes issue or PR as related to a bug. kind/feature Categorizes issue or PR as related to a new feature. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments


rhn commented Aug 29, 2021

/kind feature

I'm trying to set the workdir to a path inside the volume I just set up, but it's rejected, even though podman could check for its existence.

$ podman run --rm -v .:/mnt/sources:O --attach=stdout,stderr --name=cirunn-861192 --workdir=/mnt/sources fedora:33 bash
Error: workdir "/mnt/sources" does not exist on container e034c312dd6046f53347d9d707581c0815424b5e35f3248ce0dc41c66343cff9

Expected result: podman follows the volume mapping to determine whether the working directory is actually usable.
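The expected behavior can be sketched as a small Go helper (hypothetical names, not Podman's actual code): before rejecting a workdir as missing, check whether it equals or lies under any volume/overlay mount destination, in which case its existence cannot be judged from the image's root filesystem alone.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// workdirCoveredByMount reports whether workdir equals, or lies under,
// any of the given mount destinations. If it does, the workdir check
// must run against the mounted filesystem, not the bare image rootfs.
func workdirCoveredByMount(workdir string, mountDests []string) bool {
	workdir = filepath.Clean(workdir)
	for _, dest := range mountDests {
		dest = filepath.Clean(dest)
		if workdir == dest || strings.HasPrefix(workdir, dest+"/") {
			return true
		}
	}
	return false
}

func main() {
	mounts := []string{"/mnt/sources"} // from -v .:/mnt/sources:O
	fmt.Println(workdirCoveredByMount("/mnt/sources", mounts))     // covered: the mount itself
	fmt.Println(workdirCoveredByMount("/mnt/sources/sub", mounts)) // covered: below the mount
	fmt.Println(workdirCoveredByMount("/srv", mounts))             // not covered
}
```

With such a check, --workdir=/mnt/sources would be deferred to after the mount is set up instead of being rejected outright.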

Output of podman version:

podman version 3.3.0

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.22.3
  cgroupControllers: []
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.27-2.fc33.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.27, commit: '
xxx
  distribution:
    distribution: fedora
    version: "33"
  eventLogger: journald
  hostname: xxxxx
  idMappings: xxxx
  kernel: 5.13.6-100.fc33.x86_64
  linkmode: dynamic
xxxxx
  ociRuntime:
    name: crun
    package: crun-0.21-1.fc33.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.21
      commit: c4c3cdf2ce408ed44a9e027c618473e6485c635b
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/xxxxx/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc33.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
xxxxx
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /home/rhn/.config/containers/storage.conf
  containerStore:
    number: 10
    paused: 0
    running: 2
    stopped: 8
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.5.0-1.fc33.x86_64
      Version: |-
        fusermount3 version: 3.9.3
        fuse-overlayfs: version 1.5
        FUSE library version 3.9.3
        using FUSE kernel interface version 7.31
  graphRoot: /home/rhn/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 63
  runRoot: /run/user/xxx
  volumePath: /home/rhn/.local/share/containers/storage/volumes
version:
  APIVersion: 3.3.0
  Built: 1629488174
  BuiltTime: Fri Aug 20 21:36:14 2021
  GitCommit: ""
  GoVersion: go1.15.14
  OsArch: linux/amd64
  Version: 3.3.0

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.3.0-1.fc33.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

@openshift-ci openshift-ci bot added the kind/feature Categorizes issue or PR as related to a new feature. label Aug 29, 2021
@flouthoc (Collaborator)

@rhn Could you try -v .:/mnt/sources:Z instead of -v .:/mnt/sources:O and check if it's working.


rhn commented Aug 29, 2021

"Z", "rw", and "ro" don't cause that error, but I'd stop short of calling it "working", because the semantics are different :)

@flouthoc (Collaborator)

:O creates a temporary overlay, so you basically see the upper layer inside the container. The issue could be with the overlay driver itself. Does :O work for you when you don't use it as the workdir?


rhn commented Aug 29, 2021

Yes, the following works and /mnt/sources is populated:

podman run --rm -v .:/mnt/sources:O --attach=stdout,stderr --name=cirunn-861192 --workdir=/ fedora:33 bash

@flouthoc (Collaborator)

@rhn So I checked, and we have a bug in podman: the overlay mount happens after the check for the workdir is completed.
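The ordering bug described above can be illustrated with a toy Go model (hypothetical types, not Podman's actual implementation): checking the workdir against the rootfs before overlay mounts are applied spuriously rejects a path that the mount would have provided, while checking after the mounts succeeds.

```go
package main

import "fmt"

// container is a toy model: paths present in the image rootfs, plus
// mount destinations that only appear once mounts are applied.
type container struct {
	rootfsPaths map[string]bool
	mountDests  []string
}

// applyMounts makes each mount destination visible in the filesystem.
func (c *container) applyMounts() {
	for _, d := range c.mountDests {
		c.rootfsPaths[d] = true
	}
}

// workdirExists checks the workdir against the currently visible paths.
func (c *container) workdirExists(workdir string) bool {
	return c.rootfsPaths[workdir]
}

func main() {
	// Buggy order (as in podman 3.3.0): check runs before mounting.
	c1 := &container{rootfsPaths: map[string]bool{"/": true}, mountDests: []string{"/mnt/sources"}}
	fmt.Println(c1.workdirExists("/mnt/sources")) // false: spurious rejection

	// Fixed order: mount first, then check.
	c2 := &container{rootfsPaths: map[string]bool{"/": true}, mountDests: []string{"/mnt/sources"}}
	c2.applyMounts()
	fmt.Println(c2.workdirExists("/mnt/sources")) // true
}
```

This matches the fix in PR #11353: move the overlay mount ahead of the workdir existence check.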


flouthoc commented Aug 30, 2021

@rhn Following patch should fix podman for your use-case https://github.com/containers/podman/pull/11353/files


rhn commented Sep 3, 2021

Thanks!

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023