
podman pod logs stopped working for pods started with podman-kube@.service #17482

Closed
E1k3 opened this issue Feb 12, 2023 · 15 comments · Fixed by #17548
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@E1k3

E1k3 commented Feb 12, 2023

Issue Description

Since 68fbebf, podman kube play uses --log-driver=passthrough by default if --service-container=true (regardless of the specified log_driver setting in containers.conf).
Because of this, podman pod logs has stopped working for all pods started using this method.

Of course one can specify --log-driver=journald, but that is not possible if the provided podman-kube@.service is used to start these pods.

Steps to reproduce the issue

  1. Start pod using podman kube play
  2. Try to read logs using podman pod logs
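Concretely, the two steps look like this (the file and pod names are placeholders for illustration; the error is the one reported below):

```shell
# Hypothetical file and pod names, for illustration only.
podman kube play --service-container=true ./mypod.yaml

# With the passthrough default in effect, reading the logs fails:
podman pod logs mypod
# Error: this container is using the 'passthrough' log driver, cannot read logs: ...
```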

Describe the results you received

Error: this container is using the 'passthrough' log driver, cannot read logs: this container is not logging output

Describe the results you expected

The logs of all containers in the pod, as usual.

podman info output

host:
  arch: amd64
  buildahVersion: 1.29.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.6-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: 158b5421dbac6bda96b1457955cf2e3c34af29bc'
  cpuUtilization:
    idlePercent: 99.34
    systemPercent: 0.32
    userPercent: 0.34
  cpus: 16
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  hostname: stage-vm
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 6.1.11-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 13604532224
  memTotal: 16771158016
  networkBackend: cni
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.8-1
    path: /usr/bin/crun
    version: |-
      crun version 1.8
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.0-1
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 0
  swapTotal: 0
  uptime: 0h 17m 47.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/eike/.config/containers/storage.conf
  containerStore:
    number: 25
    paused: 0
    running: 25
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/eike/.local/share/containers/storage
  graphRootAllocated: 134680154112
  graphRootUsed: 78253633536
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 44
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/eike/.local/share/containers/storage/volumes
version:
  APIVersion: 4.4.1
  Built: 1676117906
  BuiltTime: Sat Feb 11 13:18:26 2023
  GitCommit: 34e8f3933242f2e566bbbbf343cf69b7d506c1cf-dirty
  GoVersion: go1.20
  Os: linux
  OsArch: linux/amd64
  Version: 4.4.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

No response

Additional information

No response

@E1k3 E1k3 added the kind/bug Categorizes issue or PR as related to a bug. label Feb 12, 2023
@E1k3 E1k3 changed the title podman pod logs stops working for pods started with podman kube play podman pod logs stopped working for pods started with podman kube play Feb 12, 2023
@E1k3 E1k3 changed the title podman pod logs stopped working for pods started with podman kube play podman pod logs stopped working for pods started with podman kube play / podman-kube@.service Feb 12, 2023
@E1k3 E1k3 changed the title podman pod logs stopped working for pods started with podman kube play / podman-kube@.service podman pod logs stopped working for pods started with podman-kube@.service Feb 12, 2023
@rhatdan
Member

rhatdan commented Feb 13, 2023

@vrothberg PTAL

@vrothberg
Member

@vrothberg PTAL

That was a deliberate change for Quadlet, which enforces the passthrough driver when running in systemd. @Luap99 suggested updating podman logs to handle the passthrough driver, which would fix the issue.

@Luap99
Member

Luap99 commented Feb 13, 2023

see #17348

I can work on it this week if we think this is a priority. However, I don't think it can truly replace proper podman logs: logs work on a per-container basis, while with passthrough it is impossible to filter by container, and AFAICT we would only show all pods/containers at once, which may not be suitable for some use cases.
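As a workaround under systemd, passthrough output can still be read from the journal of the unit that runs the pod, though only for the whole unit, not per container. A sketch (the YAML path is a placeholder; the instance name of podman-kube@ is the systemd-escaped path of the YAML file):

```shell
# /path/to/pod.yaml is a placeholder; substitute your own file.
journalctl --user -u "podman-kube@$(systemd-escape /path/to/pod.yaml).service"
```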

@rhatdan
Member

rhatdan commented Feb 14, 2023

I think this is a priority.

I think we should document the shortcoming.

@vrothberg
Member

There are a number of things:

  • The podman-kube template regressed and I think we should fix it
  • There is no documentation on how to work with passthrough
  • passthrough errors on podman logs and we may be able to improve on that
  • Quadlet enforces passthrough and there's no way (I know of) to override that manually or in containers.conf

@Luap99
Member

Luap99 commented Feb 14, 2023

#17502 to make podman logs work with passthrough and systemd.

@ygalblum
Collaborator

@vrothberg

  1. Do you want to change the template to set the log-driver in order to revert the regression?
  2. Do we want to add support for the LogDriver key for both .kube and .container unit files?

@vrothberg
Member

@vrothberg

1. Do you want to change the template to set the log-driver in order to revert the regression?

I am not yet sure how to fix it, but the podman-kube template should behave as it did before 4.4. It would also be nice if Quadlet supported other log drivers. One behavior I have in mind is for Quadlet to use passthrough only if no log driver is explicitly set in containers.conf or in the .container file.

2. Do we want to add support for the `LogDriver` key for both `.kube` and `.container` unit files?

Nice idea!

@ygalblum
Collaborator

I think that if you want to revert the regression, the simplest fix would be to add log-driver=journald to the podman-kube@.service template.
As for Quadlet: today it does not look into the containers.conf file at all. I think the right solution is to add the support in .kube and .container files. If it is not set, then passthrough will be used (either by explicitly setting it in .container files or leaving it empty in .kube files).
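For illustration, the first suggestion would amount to a one-line change in the template's ExecStart. This is a sketch, not the verbatim shipped unit; other template lines are omitted and may differ:

```ini
# podman-kube@.service (sketch)
[Service]
ExecStart=/usr/bin/podman kube play --replace --service-container=true --log-driver=journald %I
```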

@vrothberg
Member

I think that if you want to revert the regression, the simplest fix would be to add log-driver=journald to the podman-kube@.service template.

That is a nice idea!

@rhatdan
Member

rhatdan commented Feb 16, 2023

I agree, add support for .container and .kube to set this field.
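A hypothetical .container unit using the proposed LogDriver key might look like this (the key did not exist at the time of writing; unit name and image are placeholders):

```ini
# example.container (sketch of the proposed key)
[Container]
Image=registry.example.com/myapp:latest
# Proposed: override Quadlet's passthrough default per unit
LogDriver=journald
```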

@Luap99
Member

Luap99 commented Feb 16, 2023

I think that if you want to revert the regression, the simplest fix would be to add log-driver=journald to the podman-kube@.service template.

That is a nice idea!

That will still ignore containers.conf; if k8s-file was previously set, it is still a regression for that user.

@E1k3
Author

E1k3 commented Feb 16, 2023

I think that if you want to revert the regression, the simplest fix would be to add log-driver=journald to the podman-kube@.service template.

That is a nice idea!

That will still ignore containers.conf; if k8s-file was previously set, it is still a regression for that user.

This is the general problem with changing defaults only for certain use cases.
As with this issue, introducing a different default should probably also come with an additional setting in the config.
Example containers.conf:

# Log driver used by the container, available options: "passthrough", "journald", [...]
# defaults to "journald"
log_driver = <user-setting>

# Log driver used by containers created with podman kube play, available options "adopt", "passthrough", "journald", [...]
# adopt: Adopts setting from log_driver
# defaults to "passthrough"
kube_log_driver = <user-setting>

You decided that the general case is significantly different from the podman kube play case (otherwise you would not have changed the defaults), so it should be possible to configure them individually in containers.conf as well. Otherwise, it would never be possible for podman kube play to respect the config while the user sets something other than passthrough for "normal" containers yet still uses passthrough for podman kube play.

At least that's how it looks to me, but I might well have missed something.
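A minimal sketch of the fallback E1k3 describes (a hypothetical helper for illustration, not podman code; names and defaults follow the example config above):

```python
def resolve_log_driver(log_driver, kube_log_driver, is_kube_play):
    """Resolve the effective log driver under the proposed scheme.

    Hypothetical helper illustrating the suggested containers.conf
    semantics: kube play consults kube_log_driver first, and the
    special value "adopt" falls back to the general log_driver.
    """
    if is_kube_play:
        driver = kube_log_driver or "passthrough"  # proposed kube default
        if driver == "adopt":                      # fall back to log_driver
            return log_driver or "journald"
        return driver
    return log_driver or "journald"                # proposed general default
```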

@vrothberg
Member

I'll see what we can do. I don't think that a new option is needed. Quadlet will default to passthrough which is fine given it's a new tool. The podman-kube template issue should be fixed though.

vrothberg added a commit to vrothberg/libpod that referenced this issue Feb 17, 2023
Only enforce the passthrough log driver for Quadlet. Commit 68fbebf
introduced a regression on the `podman-kube@` template as `podman logs`
stopped working and settings from containers.conf were ignored.

Fixes: containers#17482
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
@vrothberg
Member

I opened #17548 to fix the issue.

openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/podman that referenced this issue Feb 17, 2023

openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/podman that referenced this issue Feb 20, 2023
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 1, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 1, 2023