
Podman behavior with shared volumes : copying files from mounted path #12714

Closed
pycloux opened this issue Dec 29, 2021 · 0 comments · Fixed by #12733
Labels
kind/bug (Categorizes issue or PR as related to a bug.)
locked - please file new issue/PR (Assist humans wanting to comment on an old issue or PR with locked comments.)

Comments

@pycloux

pycloux commented Dec 29, 2021

/kind bug

Description

I don't know if this is intentional, but the behavior of named volumes differs between Docker and Podman.
If you mount an "empty" named volume into a container at a new path (say "/opt/test"), and later mount the same volume into another container at a non-empty path, Docker copies the content of that non-empty path onto the volume.
With Podman, the volume stays "empty" once it has been mounted to any container, even once.
Docker is more permissive: it copies the data up whenever the volume is empty, even if the volume was mounted previously.

Steps to reproduce the issue:

  1. Create a new volume:
$ podman volume create volume-test
  2. Create a container with the volume mounted at an empty location:
$ podman run --rm -ti --name=centos-test -v volume-test:/opt/test centos:7
  3. After exiting the first container, create a container with the volume mounted at a location with pre-existing content:
$ podman run --rm -ti --name=centos-test -v volume-test:/var centos:7

Describe the results you received:

When running ls on the volume mountpoint, i.e. either:

  • ls /opt/test in the first container
  • ls /var in the second container

the folder is empty.

Describe the results you expected:

With Docker, the folder would contain the content copied from the /var directory of the centos:7 image.
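For illustration, the copy-up rule Docker applies (and Podman 3.3 did not) can be emulated with plain directories. This is only a sketch: `vol`, `empty`, and `target` are stand-in temp directories for the named volume and the two container mountpoints, not real Podman objects.

```shell
# Sketch: emulate Docker's copy-up decision with plain directories.
# "vol" stands in for the named volume; "empty" and "target" stand in
# for the /opt/test and /var mountpoints of the two containers.
set -eu

vol=$(mktemp -d)        # the (initially empty) named volume
empty=$(mktemp -d)      # first mountpoint: /opt/test, no content
target=$(mktemp -d)     # second mountpoint: /var, pre-existing content
touch "$target/existing-file"

# First mount, onto an empty path: nothing to copy up either way.
if [ -n "$(ls -A "$empty")" ]; then
    cp -a "$empty/." "$vol/"
fi

# Second mount: Docker copies up because the volume is STILL empty;
# Podman 3.3 skipped this step because the volume had been mounted once before.
if [ -z "$(ls -A "$vol")" ]; then
    cp -a "$target/." "$vol/"
fi

ls "$vol"    # prints: existing-file
```

Under Docker's rule the volume ends up containing `existing-file`; under Podman 3.3's rule (copy up only on the very first mount) the second `if` would never run and the volume would stay empty.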

Additional information you deem important (e.g. issue happens only occasionally):

None

Output of podman version:

Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.16.7
Built:        Wed Nov 10 01:48:06 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.22.3
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.29-1.module+el8.5.0+710+4c471e88.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: 7a80c42d2005a1f098cd800cd1db2b7a4ba8a8ae'
  cpus: 2
  distribution:
    distribution: '"rocky"'
    version: "8.4"
  eventLogger: file
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 4.18.0-305.3.1.el8_4.x86_64
  linkmode: dynamic
  memFree: 7776194560
  memTotal: 8347975680
  ociRuntime:
    name: runc
    package: runc-1.0.2-1.module+el8.5.0+710+4c471e88.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.2
      spec: 1.0.2-dev
      go: go1.16.7
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.5.0+710+4c471e88.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 2147479552
  swapTotal: 2147479552
  uptime: 1h 55m 35.28s (Approximately 0.04 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/vagrant/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.7.1-1.module+el8.5.0+710+4c471e88.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.7.1
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/vagrant/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 2
  runRoot: /run/user/1000/containers
  volumePath: /home/vagrant/.local/share/containers/storage/volumes
version:
  APIVersion: 3.3.1
  Built: 1636508886
  BuiltTime: Wed Nov 10 01:48:06 2021
  GitCommit: ""
  GoVersion: go1.16.7
  OsArch: linux/amd64
  Version: 3.3.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.3.1-9.module+el8.5.0+710+4c471e88.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Podman troubleshooting guide: YES

Podman latest release (3.4.0): NO

Additional environment details (AWS, VirtualBox, physical, etc.):

Running in Vagrant with Virtualbox provider.

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Dec 29, 2021
rhatdan added a commit to rhatdan/podman that referenced this issue Jan 3, 2022
Currently Docker copies up the first volume on a mountpoint with
data.

Fixes: containers#12714

Also added NeedsCopyUP, NeedsChown and MountCount to the podman volume
inspect code.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
rhatdan added further commits to rhatdan/podman that referenced this issue on Jan 3, Jan 4, and Jan 6, 2022 (same commit message as above)
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023