
engine.detach_keys is not used correctly with podman compose #22756

Open
stacyharper opened this issue May 20, 2024 · 5 comments
Labels: kind/bug, stale-issue

Comments


stacyharper commented May 20, 2024

Issue Description

Containers started with "podman compose run" do not use my configured detach_keys.

Steps to reproduce the issue

Put this in .config/containers/containers.conf:

[engine]
detach_keys="ctrl-x,x"

Run a container with podman run --rm -it alpine sh, then press <ctrl-x>,x: it detaches from the container.

Now, prepare this docker-compose.yml:

services:
  foo:
    image: alpine

Then run the container with podman compose run --rm foo sh and try <ctrl-x>,x: it does not detach.
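
As a side check, the detach sequence itself can be forced per invocation with podman run's --detach-keys flag, which bypasses containers.conf entirely; if this detaches, the sequence is fine and the problem is in how the configured value is propagated:

# override the detach sequence for this invocation only
podman run --detach-keys="ctrl-x,x" --rm -it alpine sh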

Describe the results you received

The container still uses the default detach sequence <C-p><C-q>.

Describe the results you expected

podman compose should wrap all kinds of docker-compose arguments with its configured values.
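
Since podman compose delegates to an external provider (here /usr/bin/docker-compose, per the session paste further down), one possible client-side workaround sketch is to set the detach keys in the Docker CLI config as well. detachKeys is a documented docker CLI setting; whether the docker-compose provider applies it to compose run here is an assumption to verify:

# ~/.docker/config.json (docker CLI client config; JSON has no comments,
# so the hedging lives here: compose support for this key is unverified)
{
  "detachKeys": "ctrl-x,x"
}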

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.7
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-r0
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: unknown'
  cpuUtilization:
    idlePercent: 97.76
    systemPercent: 0.69
    userPercent: 1.55
  cpus: 12
  databaseBackend: sqlite
  distribution:
    distribution: alpine
    version: 3.20.0_rc1
  eventLogger: file
  freeLocks: 2022
  hostname: yellow-orcess
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.6.31-0-lts
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 999211008
  memTotal: 16711921664
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-r0
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-r0
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.15-r0
    path: /usr/bin/crun
    version: |-
      crun version 1.15
      commit: e6eacaf4034e84185fd8780ac9262bbf57082278
      rundir: /run/user-1000/crun
      spec: 1.0.0
      +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/user-1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.3-r0
    version: |-
      slirp4netns version 1.2.3
      commit: c22fde291bb35b354e6ca44d13be181c76a0a432
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 4294701056
  swapTotal: 4294963200
  uptime: 7h 45m 27.00s (Approximately 0.29 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/stacy/.config/containers/storage.conf
  containerStore:
    number: 23
    paused: 0
    running: 14
    stopped: 9
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.13-r0
      Version: |-
        fuse-overlayfs: version 1.13-dev
        fusermount3 version: 3.16.2
        FUSE library version 3.16.2
        using FUSE kernel interface version 7.38
  graphRoot: /home/stacy/.local/share/containers/storage
  graphRootAllocated: 486350540800
  graphRootUsed: 398792589312
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 121
  runRoot: /run/user-1000/containers
  transientStore: false
  volumePath: /home/stacy/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.4
  Built: 1715930749
  BuiltTime: Fri May 17 09:25:49 2024
  GitCommit: ""
  GoVersion: go1.22.3
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.4

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

I'm on alpine linux, using all of those packages:

podman
podman-docker
docker-cli-compose
fuse-overlayfs


stacyharper added the kind/bug label on May 20, 2024
JayKayy (Contributor) commented May 20, 2024

I was trying to debug this and could not reproduce the issue. However, I'm wondering if it has to do with the specific config file that you modified in the first step. Looking at the docs in man containers.conf:

Container engines read the /usr/share/containers/containers.conf, /etc/containers/containers.conf, and /etc/containers/containers.conf.d/*.conf files if they exist. When running in rootless mode, they also read $HOME/.config/containers/containers.conf and $HOME/.config/containers/containers.conf.d/*.conf files.

It mentions that the file you modified in your first step (~/.config/containers/containers.conf) is only read in rootless mode. I'm wondering if podman compose is for some reason not running the container as rootless.

If you take the settings you provided and put the config changes in one of the global config files, do you still see this issue?
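
For example, a minimal sketch of that change, relocating the same [engine] stanza from the reproduction steps into the system-wide file:

# /etc/containers/containers.conf -- read in both rootful and rootless mode
[engine]
detach_keys="ctrl-x,x"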

stacyharper (Author)

Ah! I think I just had to restart the podman socket daemon... Now, after a reboot, I can no longer reproduce it.
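
For reference, a sketch of that restart, assuming the per-user API socket is either managed by systemd or started by hand (as on a non-systemd host like Alpine); the socket path below is the one from the podman info output above:

# systemd hosts: restart the per-user API socket so a fresh service
# re-reads containers.conf
systemctl --user restart podman.socket

# without systemd: stop any hand-started service and relaunch it
pkill -f "podman system service" || true
podman system service --time=0 unix:///run/user-1000/podman/podman.sock &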

stacyharper (Author) commented May 21, 2024

Maybe it is unrelated to this ticket, but I still see a difference in behavior between the podman compose and podman run contexts.

With podman run: there is no <c-p><c-q> shortcut at all.

With podman compose run: <c-p><c-q> still detaches, in addition to <c-x>x.

The following paste shows first a <c-p><c-q>, then a <c-x>x:

[stacy@yellow-orcess ~/tmp]$ podman compose run --rm foo sh
>>>> Executing external compose provider "/usr/bin/docker-compose". Please refer to the documentation for details. <<<<

/ # ERRO[0002] error waiting for container: context canceled
[stacy@yellow-orcess ~/tmp]$ podman compose run --rm foo sh
>>>> Executing external compose provider "/usr/bin/docker-compose". Please refer to the documentation for details. <<<<

/ # Error: detached from container
                                  pwd

^C[stacy@yellow-orcess ~/tmp]$

rhatdan (Member) commented May 21, 2024

The containers.conf files are only read at podman start. In one case you have a podman running locally, which reads containers.conf; in the podman compose case you are talking to the podman service, which could have been running before the containers.conf was set?

You can eliminate the podman compose part and just do

podman --remote run ...

and see if the detach keys work properly after a service restart.
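
A sketch of that test, using the image and flags from the reproduction steps above (nothing else assumed):

# --remote goes through the API service over the socket, the same path
# docker-compose takes, so this isolates the service from the provider
podman --remote run --rm -it alpine sh
# then press <ctrl-x>,x and check whether it detaches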


A friendly reminder that this issue had no activity for 30 days.
