
Running containers without defined or disabled health checks should not return "Health": {} #18792

Closed
onmomo opened this issue Jun 5, 2023 · 10 comments
Labels: 5.0 · kind/bug · locked - please file new issue/PR

Comments

onmomo commented Jun 5, 2023

Issue Description

When running a container with Podman that does not define any health check, podman inspect still returns the Health state shown below.

Steps to reproduce the issue


  1. Run podman with health checks disabled, or with an image that does not define one:
podman run --no-healthcheck -e POSTGRES_PASSWORD=password postgres:9.5
or, since this image does not appear to define a health check:
podman run -e POSTGRES_PASSWORD=password postgres:9.5
  2. Check the state:
podman inspect $cid
"Health": {
    "Status": "",
    "FailingStreak": 0,
    "Log": null
},
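
A quick way to pull out just this block (a sketch, assuming jq is available and that the Health block sits under .State in the inspect JSON, as it does in Docker's output; the container is started detached here only so the shell captures the ID):

# Start the container detached, then extract only the health section.
cid=$(podman run -d --no-healthcheck -e POSTGRES_PASSWORD=password postgres:9.5)
podman inspect "$cid" | jq '.[0].State.Health'

On podman 4.x this still prints the block shown above even though no health check is configured.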

Describe the results you received

podman inspect $cid
"Health": {
    "Status": "",
    "FailingStreak": 0,
    "Log": null
},

Describe the results you expected

As with Docker, I would expect no "Health" section at all, indicating that this container is not running any health checks.
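
For comparison, the Docker behaviour can be confirmed with something like this (a sketch; assumes Docker and jq are installed and uses the same image):

# With no HEALTHCHECK configured, Docker omits the Health key from .State,
# so jq reports null for it.
cid=$(docker run -d -e POSTGRES_PASSWORD=password postgres:9.5)
docker inspect "$cid" | jq '.[0].State.Health'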

podman info output

podman info                                                                                   
host:
  arch: arm64
  buildahVersion: 1.30.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 91.22
    systemPercent: 2.77
    userPercent: 6.01
  cpus: 1
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: coreos
    version: "38"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 501
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 6.2.15-300.fc38.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 363335680
  memTotal: 2049069056
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.8.5-1.fc38.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.5
      commit: b6f80f766c9a89eb7b1440c0a70ab287434b17ed
      rundir: /run/user/501/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/501/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-12.fc38.aarch64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 0h 53m 21.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 8
    paused: 0
    running: 1
    stopped: 7
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 106769133568
  graphRootUsed: 6792806400
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 69
  runRoot: /run/user/501/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 4.5.0
  Built: 1681486872
  BuiltTime: Fri Apr 14 17:41:12 2023
  GitCommit: ""
  GoVersion: go1.20.2
  Os: linux
  OsArch: linux/arm64
  Version: 4.5.0

Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

Yes

Additional environment details


Additional information

similar looking

onmomo added the kind/bug label Jun 5, 2023
github-actions bot added the remote label Jun 5, 2023
Luap99 added the Good First Issue label Jun 6, 2023
rhatdan added commits to rhatdan/podman referencing this issue on Jun 6, Jun 9, and Jun 12, 2023:
Fixed: containers#18792

This will match Docker behaviour.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Luap99 (Member) commented Jun 13, 2023

Is this just cosmetic or do you have an actual tool that parses the json output and depends on it?

onmomo (Author) commented Jun 13, 2023

Is this just cosmetic or do you have an actual tool that parses the json output and depends on it?

Actually the latter.

Luap99 (Member) commented Jun 13, 2023

OK, and would just showing "Health": {} in this case help you? Or does it not need to be included at all?

onmomo (Author) commented Jun 14, 2023

Ok and would just showing "Health": {} in this case help you? Or does it not need to be included at all.

The script checks whether a "Health" section is present and, if so, assumes a health check must be in place. I have already patched the third-party script and removed that health-check handling, since I don't know of another way to detect whether a health check is in place with podman.
Having said that, to maximise compatibility with existing tooling, I would vote for following the Docker implementation.
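
For illustration, the kind of detection logic that trips over this looks roughly like the following (a hypothetical sketch, not the actual third-party script):

# Hypothetical check: treat a non-null Health block as "a health check exists".
# On podman 4.x this misfires because the block is always present, even when
# no health check is configured.
if podman inspect "$cid" | jq -e '.[0].State.Health' > /dev/null; then
    echo "health check configured, waiting for healthy status..."
fi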

Luap99 (Member) commented Jun 14, 2023

The problem is that we cannot do that without breaking the type, which would break our stable API promise for pkg/bindings.
So we would need to add a new struct just for that type in the frontend, which is very ugly and a lot of duplicated code.
I think we should wait for podman 5.0 and then just change the field to a pointer, like the original patch from @rhatdan did.

Right now you could check for .Config.Healthcheck which should achieve the same thing and should work for both docker and podman (at least on the version I tested).
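
For example, something along these lines (a sketch using jq; the field name follows the suggestion above, and the "works for both" claim is only as tested in that comment):

# Null means no health check is configured; otherwise the check definition is printed.
podman inspect "$cid" | jq '.[0].Config.Healthcheck'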

onmomo (Author) commented Jun 14, 2023

The problem is that we cannot do that without breaking the type, which would break our stable API promise for pkg/bindings.

So we would need to add a new struct just for that type in the frontend, which is very ugly and a lot of duplicated code.

I think we should wait for podman 5.0 and then just change the field to a pointer, like the original patch from @rhatdan did.

Right now you could check for .Config.Healthcheck which should achieve the same thing and should work for both docker and podman (at least on the version I tested).

Makes sense, thanks for the insights!

Luap99 added the 5.0 label and removed the Good First Issue and remote labels Jun 14, 2023
@github-actions

A friendly reminder that this issue had no activity for 30 days.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

mheon (Member) commented Feb 6, 2024

@ashley-cui Can this be closed?

ashley-cui (Member) commented

Yep, closed via #21388.

stale-locking-app bot added the locked - please file new issue/PR label May 7, 2024
stale-locking-app bot locked as resolved and limited conversation to collaborators May 7, 2024