
podman container ignores storage-opt size= when managed using docker-compose #11016

Closed
vikas-goel opened this issue Jul 21, 2021 · 17 comments · Fixed by containers/storage#1035 or #11991
Labels: kind/bug · locked - please file new issue/PR

vikas-goel (Contributor) commented Jul 21, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
When a Podman container is started through Docker Compose with the storage option size=2G specified, Podman ignores the option during container creation. As a result, the size limit is not enforced on the container's root filesystem, and podman inspect on the container shows no reference to the size specification.

Steps to reproduce the issue:

  1. Set up the Podman environment to run with docker-compose.

  2. Create a Docker Compose file for the container that includes:

    storage_opt:
      size: 2G

  3. Run the container using the docker-compose command:

    docker-compose -f mysvc.yml up -d mysvc
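For reference, the compose file used in the commands above can be sketched minimally as follows (the service name mysvc and the alpine image are assumptions taken from elsewhere in this report, not the reporter's actual file):

```yaml
version: "3"
services:
  mysvc:
    image: quay.io/libpod/alpine:latest
    command: sleep infinity
    # storage_opt is forwarded by docker-compose as HostConfig.StorageOpt
    # on the container-create API call.
    storage_opt:
      size: 2G
```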

Describe the results you received:

A dd command inside the container can keep writing to the root filesystem regardless of the size limit specified in the Docker Compose file.

Describe the results you expected:
The dd command should have failed after writing 2G.

Additional information you deem important (e.g. issue happens only occasionally):
Consistently

Output of podman version:

Version:      3.3.0-dev
API Version:  3.3.0-dev
Go Version:   go1.16.5
Git Commit:   6678385abc34521dc85a0a549ee306b73ebc2911
Built:        Wed Jul 21 14:14:44 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.22.0-dev
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.26-1.module+el8.4.0+10607+f4da7515.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.26, commit: b883692702312720058141f16b6002ab26ead2e7'
  cpus: 8
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: file
  hostname: flex-vm-02.dc2.ros2100.veritas.com
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-305.el8.x86_64
  linkmode: dynamic
  memFree: 13595041792
  memTotal: 33511845888
  ociRuntime:
    name: runc
    package: runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 16924012544
  swapTotal: 16924012544
  uptime: 43h 20m 13.44s (Approximately 1.79 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 2
    stopped: 2
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 2
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.3.0-dev
  Built: 1626891284
  BuiltTime: Wed Jul 21 14:14:44 2021
  GitCommit: 6678385abc34521dc85a0a549ee306b73ebc2911
  GoVersion: go1.16.5
  OsArch: linux/amd64
  Version: 3.3.0-dev

Package info (e.g. output of rpm -q podman or apt list podman):

private build from main branch

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):
VMware VM

github-actions bot commented:

A friendly reminder that this issue had no activity for 30 days.

rhatdan (Member) commented Aug 23, 2021

@mheon @baude @jwhonce How should we handle this? I don't believe we currently allow users to specify storage-opts to the remote service.

Is supporting this a key feature of compose? If we wanted to support it, I would guess we would need to compare the "default" size against the size specified by the remote client and make sure the specified size is less than the default.

vikas-goel (Contributor, Author) commented:

I am not sure how Docker handles the storage-opt value when it is set as a default versus passed as a command-line argument. Logically, I would think that when the command-line argument is passed it is honored regardless of the default; otherwise, the default is used.

That is, the command-line argument should override the default, whether it is larger or smaller.

rhatdan (Member) commented Aug 31, 2021

Sure, and that is how it works for the client. The issue is that we don't pass those values to the server. If you set them on the server side, they would be honored.
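For context, the server-side default being referred to lives in /etc/containers/storage.conf (the configFile path shown in the podman info output above). A sketch of such a default, assuming the overlay driver on a quota-capable xfs backing filesystem — check containers-storage.conf(5) on your version for the exact section and key:

```toml
[storage.options.overlay]
# Default root-filesystem size limit applied to every container
# created by this engine; requires project-quota support on xfs.
size = "10G"
```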

vikas-goel (Contributor, Author) commented:

In my use case, I don't set any default value. A different storage-opt policy applies to different containers depending on their (internal) types.

rhatdan (Member) commented Sep 1, 2021

@jwhonce Is this a bug in our docker API compatibility? Do we support passing some storage options?

mheon (Member) commented Sep 1, 2021

We formerly had some support (in the frontend, but not the backend) but you ripped it out of the podman run options list because it was breaking local Podman's --storage-opt flag.

I think we need to add it back, and figure out how to wire it into c/storage such that the options are actually respected.

rhatdan (Member) commented Sep 1, 2021

SGTM. The question is should we support this from podman-remote.

jwhonce (Member) commented Sep 1, 2021

@rhatdan The HostConfig.StorageOpt[] values are accepted by the API and then passed down into pkg/specgenutil/specgen.go:FillOutSpecGen(), where they appear not to be processed.

@mheon I assume getting specgen to process StorageOpts is the first step?

mheon (Member) commented Sep 2, 2021

I think the first step is determining whether c/storage even supports configuring these on a per-container (rather than per-boot) level right now - if we can't pass the options all the way down, no point.

Also, I swear @rhatdan removed HostConfig.StorageOpt[] (or maybe it was just the CLI flag that referred to it?)

mheon (Member) commented Sep 2, 2021

s/per-boot/per-instance - Basically, we pass in storage options once (at store init) right now. Can we pass them down as part of creating a container instead? Don't know, need to investigate.

rhatdan (Member) commented Sep 2, 2021

I know overlay can support it, but not sure if there is a mechanism to do it at container creation time. Might need to add new interfaces to the driver library.

github-actions bot commented Oct 3, 2021

A friendly reminder that this issue had no activity for 30 days.

rhatdan added a commit to rhatdan/storage that referenced this issue Oct 4, 2021
Drivers have the ability to support size and inodes quota, but
we don't allow these options to be passed down to the driver.

With this fix, Podman should be able to support --storage-opt size=
per container.

Helps Fix: containers/podman#11016

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
vikas-goel (Contributor, Author) commented:
Thank you @rhatdan. Will the storage-opt option work now when used with docker-compose?

mheon (Member) commented Oct 5, 2021

No, still needs to be plumbed in on the Podman side.

mheon reopened this Oct 5, 2021
rhatdan (Member) commented Oct 5, 2021

Do you have a curl command with which I could trigger this behaviour? I have most of the plumbing done, but I want to make sure everything works correctly.

@jwhonce Do you have a podman-py test we could run?

jwhonce (Member) commented Oct 12, 2021

@rhatdan

$ curl -X POST -v -H "Content-Type: application/json" --data-binary @/tmp/body1 http://localhost:8080/containers/create?name=Ctnr001

$ more /tmp/body1
{"Image": "quay.io/libpod/alpine:latest", "HostConfig": {"StorageOpt": {"Size":"1228800"}}}
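The same request body can be built programmatically; a minimal Python sketch using only the standard library, matching the curl example above (build_create_payload is an illustrative helper, not part of any Podman client library):

```python
import json

def build_create_payload(image, size_bytes):
    """Build a Docker-compatible /containers/create body carrying a
    per-container storage size limit (illustrative helper only)."""
    return {
        "Image": image,
        "HostConfig": {
            # The Docker API models StorageOpt as a map of string -> string,
            # so the size is sent as a string, here in bytes.
            "StorageOpt": {"Size": str(size_bytes)},
        },
    }

body = json.dumps(build_create_payload("quay.io/libpod/alpine:latest", 1228800))
print(body)
# POST this body with Content-Type: application/json to
# /containers/create?name=..., e.g. with the curl command shown above.
```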

rhatdan added a commit to rhatdan/podman that referenced this issue Oct 15, 2021
Fixes: containers#11016

[NO NEW TESTS NEEDED] We have no easy way to test this in
CI/CD systems. Requires quota to be set up on directories to work.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
github-actions bot added the "locked - please file new issue/PR" label, locked the conversation as resolved, and limited it to collaborators on Sep 21, 2023.