podman logs: --follow does not always work in combination with --since #16950

Closed
marhkb opened this issue Dec 27, 2022 · 2 comments
marhkb commented Dec 27, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Logs are not always followed when --follow is used in combination with --since. It seems that a certain threshold must be hit for the logs to be followed.

Steps to reproduce the issue:

  1. Start a container that logs something every 5 seconds:
$ podman run --name ticker alpine sh -c 'while :; do echo "tick" && sleep 5; done'
  2. Wait approximately 10 seconds.
  3. Try to retrieve the logs since 1 second ago and keep following them by issuing
$ podman logs --timestamps --follow --since $(date +%s -d "1 sec ago") ticker
  4. The logs are not followed.
  5. Repeat step 3, but multiply the seconds by 10.
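
For convenience, the steps above can be run from a single terminal as a rough sketch. The only change from the steps as written is starting the container detached with -d so the logs commands can run in the same shell:

# start the ticker container; -d (detach) is an addition here so the follow-up
# commands can run from the same terminal (the report starts it in the foreground)
$ podman run -d --name ticker alpine sh -c 'while :; do echo "tick" && sleep 5; done'

# let it produce a couple of log lines
$ sleep 10

# --since 1 second ago: the command returns instead of waiting for new "tick"
# lines (the bug)
$ podman logs --timestamps --follow --since $(date +%s -d "1 sec ago") ticker

# --since far enough in the past (before the container's first log line):
# following works as expected; interrupt with Ctrl-C
$ podman logs --timestamps --follow --since $(date +%s -d "100 sec ago") ticker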

Here is a video depicting the issue:

Bildschirmaufzeichnung.vom.2022-12-27.23-30-01.webm

Describe the results you received:
Following the logs doesn't work as long as the timestamp passed to --since is later than the timestamp of the container's first log line. In this specific case, 100 seconds should work (see step 3):

$ podman logs --timestamps --follow --since $(date +%s -d "100 sec ago") ticker

Describe the results you expected:
The logs are followed whenever I specify --follow, regardless of the --since value.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Client:       Podman Engine
Version:      4.3.1
API Version:  4.3.1
Go Version:   go1.19.2
Built:        Fri Nov 11 16:01:27 2022
OS/Arch:      linux/amd64

Output of podman info:

host:
  arch: amd64
  buildahVersion: 1.28.0
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.5-1.fc37.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.5, commit: '
  cpuUtilization:
    idlePercent: 75.73
    systemPercent: 5.07
    userPercent: 19.2
  cpus: 16
  distribution:
    distribution: fedora
    variant: silverblue
    version: "37"
  eventLogger: journald
  hostname: asus-tuf-gaming-X570-plus
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.0.14-300.fc37.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 3759407104
  memTotal: 33560997888
  networkBackend: cni
  ociRuntime:
    name: crun
    package: crun-1.7.2-2.fc37.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.7.2
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-8.fc37.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 1313239040
  swapTotal: 8589930496
  uptime: 34h 4m 30.00s (Approximately 1.42 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/marcus/.config/containers/storage.conf
  containerStore:
    number: 13
    paused: 3
    running: 6
    stopped: 4
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/marcus/.local/share/containers/storage
  graphRootAllocated: 3248905781248
  graphRootUsed: 487188070400
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 19
  runRoot: /run/user/1000/containers
  volumePath: /var/home/marcus/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 1668178887
  BuiltTime: Fri Nov 11 16:01:27 2022
  GitCommit: ""
  GoVersion: go1.19.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):

podman-4.3.1-1.fc37.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

I did check the troubleshooting page, but haven't checked the latest git version of podman.

Additional environment details (AWS, VirtualBox, physical, etc.):

openshift-ci bot added the kind/bug label Dec 27, 2022
marhkb commented Dec 28, 2022

This seems to be a duplicate of #14104. But it couldn't be reproduced back then.

Luap99 commented Jan 4, 2023

Looks like this only affects journald and not the k8s-file log driver.
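
A rough way to compare the two drivers (a hedged sketch; the container names are made up, and --log-driver selects among the drivers listed under plugins.log in the podman info output above):

# journald driver (the default on this host): following stops with a recent --since
$ podman run -d --log-driver journald --name ticker-journald alpine sh -c 'while :; do echo tick && sleep 5; done'
$ podman logs --follow --since $(date +%s -d "1 sec ago") ticker-journald

# k8s-file driver: following keeps printing new lines as expected
$ podman run -d --log-driver k8s-file --name ticker-file alpine sh -c 'while :; do echo tick && sleep 5; done'
$ podman logs --follow --since $(date +%s -d "1 sec ago") ticker-file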

Luap99 self-assigned this Jan 4, 2023
Luap99 added a commit to Luap99/libpod that referenced this issue Jan 4, 2023
The `containerCouldBeLogging` bool should not be false by default: when --since is used we seek in the journal and can miss the start event, so the bool would stay false forever. This means that a running container is not followed even when it should be.

To fix this we can just set the `containerCouldBeLogging` bool based on the current container state.

Fixes containers#16950

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
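
Assuming a podman build that contains this change, the originally failing reproduction should now keep following (a hedged expectation, not a tested transcript):

# with the fix, a --since value after the container's start event no longer
# prevents --follow from waiting for new log lines
$ podman logs --timestamps --follow --since $(date +%s -d "1 sec ago") ticker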
The same commit was referenced again on Jan 4 and Jan 5, 2023.
github-actions bot added the locked - please file new issue/PR label Sep 5, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 5, 2023