
Docker Compose health check disable flag is not handled correctly #14493

Closed
aksiksi opened this issue Jun 6, 2022 · 3 comments · Fixed by #14626
Labels: kind/bug, locked - please file new issue/PR

Comments


aksiksi commented Jun 6, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Podman seems to ignore the healthcheck disable flag set via Docker Compose.

Steps to reproduce the issue:

  1. Set up the Podman API service (to emulate a Docker socket).

  2. Create a Docker Compose service for a container image that defines healthchecks, then disable them via the healthcheck key (example):

version: "3.7"
services:
  dashy:
    image: docker.io/lissy93/dashy:latest
    container_name: dashy
    volumes:
      - /var/volumes/dashy:/app/public
    ports:
      - "4000:80"
    restart: unless-stopped
    healthcheck:
      disable: true

Describe the results you received:

Health checks are not disabled properly. Inspecting the container (for example: podman inspect dashy):

"Healthcheck":{
   "Test":[
      "CMD-SHELL",
      "NONE"
   ],
   "Interval":30000000000,
   "Timeout":30000000000,
   "Retries":3
}

If we remove the healthcheck section from the service definition, we see the default healthcheck, as expected:

"Healthcheck":{
   "Test":[
      "CMD-SHELL",
      "yarn health-check"
   ],
   "StartPeriod":30000000000,
   "Interval":300000000000,
   "Timeout":2000000000
}

Describe the results you expected:

When we run the same container directly using the Podman CLI and pass in --no-healthcheck, we get the desired behavior:

$ sudo podman run --no-healthcheck -d -v /var/volumes/dashy:/app/public docker.io/lissy93/dashy:latest
$ podman inspect [...]
...
"Healthcheck":{
   "Test":[
      "NONE"
   ]
}

The Docker Compose healthcheck disable flag should be handled the same way.

Output of podman version:

Version:      3.4.2
API Version:  3.4.2
Go Version:   go1.15.2
Built:        Wed Dec 31 19:00:00 1969
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.30, commit: '
  cpus: 8
  distribution:
    codename: focal
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: nabeul
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.4.0-113-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 3881971712
  memTotal: 12260155392
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version UNKNOWN
      commit: ea1fe3938eefa14eb707f1d22adff4db670645d6
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.8
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 0
  swapTotal: 0
  uptime: 6h 15m 45.9s (Approximately 0.25 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 18
    paused: 0
    running: 14
    stopped: 4
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 20
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.4.2
  Built: 0
  BuiltTime: Wed Dec 31 19:00:00 1969
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.4.2

Package info (e.g. output of rpm -q podman or apt list podman):

Listing... Done
podman/unknown,now 100:3.4.2-5 amd64 [installed]
podman/unknown 100:3.4.2-5 arm64
podman/unknown 100:3.4.2-5 armhf
podman/unknown 100:3.4.2-5 s390x

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes - also tested with a version built manually:

Client:       Podman Engine
Version:      4.0.0-dev
API Version:  4.0.0-dev
Go Version:   go1.16.5
Git Commit:   b1d37a7e21bfb3e12af2e7cee25dc88ac4f148dd-dirty
Built:        Mon Dec 31 19:00:00 1979
OS/Arch:      linux/amd64

Additional environment details (AWS, VirtualBox, physical, etc.):

Running in KVM on Proxmox.

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 6, 2022

mheon commented Jun 6, 2022

I'll take this one

@mheon mheon self-assigned this Jun 6, 2022

mheon commented Jun 13, 2022

@jakecorrenti Can you take this one? I'm not going to find time this week

jakecorrenti commented:

Sure thing, I can take a look

jakecorrenti pushed a commit to jakecorrenti/podman that referenced this issue Jun 16, 2022
Previously, if a container had healthchecks disabled in the
docker-compose.yml file and the user did a `podman inspect <container>`,
they would have an incorrect output:

```
"Healthcheck":{
   "Test":[
      "CMD-SHELL",
      "NONE"
   ],
   "Interval":30000000000,
   "Timeout":30000000000,
   "Retries":3
}
```

After the change, the output is correct:
```
"Healthcheck":{
   "Test":[
      "NONE"
   ]
}
```

Closes: containers#14493

Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
jakecorrenti pushed a commit to jakecorrenti/podman that referenced this issue Jun 17, 2022
Previously, if a container had healthchecks disabled in the
docker-compose.yml file and the user did a `podman inspect <container>`,
they would have an incorrect output:

```
"Healthcheck":{
   "Test":[
      "CMD-SHELL",
      "NONE"
   ],
   "Interval":30000000000,
   "Timeout":30000000000,
   "Retries":3
}
```

After the change, the output is correct:
```
"Healthcheck":{
   "Test":[
      "NONE"
   ]
}
```

Additionally, I extracted the hard-coded strings that were used for
comparisons into constants in `libpod/define` to prevent a similar issue
from recurring.

Closes: containers#14493

Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
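The constant extraction mentioned in the commit message might look roughly like this (identifiers here are illustrative; the actual names in `libpod/define` may differ):

```go
package main

import "fmt"

// Illustrative constants in the spirit of the extraction into
// libpod/define described above (actual identifiers may differ).
const (
	// HealthConfigTestNone disables health checks when it is the
	// sole element of the test slice.
	HealthConfigTestNone = "NONE"
	// HealthConfigTestCmdShell runs the rest of the test slice
	// through the container's shell.
	HealthConfigTestCmdShell = "CMD-SHELL"
)

// healthCheckDisabled compares against the named constant instead of a
// scattered string literal, so the sentinel lives in exactly one place.
func healthCheckDisabled(test []string) bool {
	return len(test) == 1 && test[0] == HealthConfigTestNone
}

func main() {
	fmt.Println(healthCheckDisabled([]string{HealthConfigTestNone}))              // → true
	fmt.Println(healthCheckDisabled([]string{HealthConfigTestCmdShell, "NONE"})) // → false
}
```

Centralizing the sentinel strings means a future comparison can only go wrong by failing to use the constant, rather than by misspelling or mishandling the literal.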
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 20, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023