
Previous and next is not working under logs #6223

Open
johnr84 opened this issue Jul 1, 2021 · 15 comments
Labels
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.

Comments


johnr84 commented Jul 1, 2021

Installation method:
Kubernetes version: 1.21.1
Dashboard version: 2.3.1
Operating system: Ubuntu 20.04.2

Steps to reproduce
Deploy a Deployment whose container emits colored (ANSI-escaped) logs:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: color
spec:
  selector:
    matchLabels:
      app: color
  template:
    metadata:
      labels:
        app: color
    spec:
      containers:
      - image: ubuntu
        name: color
        command: ["/bin/bash", "-c", 'while true; do echo -e "Meet the \e[92mcucumber!"; sleep 1; done']

Go to the pod logs and click the "<" or ">" button. The viewer shows the error "The selected container has not logged any messages yet".
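For reference, a minimal way to apply the manifest and confirm the container is actually producing log lines before opening the Dashboard (the file name color.yaml is just an assumption for this sketch):

kubectl apply -f color.yaml         # the Deployment manifest above, saved locally
kubectl get pods -l app=color       # find the generated pod name
kubectl logs deploy/color --tail=5  # confirm the colored lines are emitted every second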

Observed result
(screenshot: the log viewer shows "The selected container has not logged any messages yet")

Expected result
Both the next and previous buttons show the respective logs.

johnr84 added the kind/bug label on Jul 1, 2021
floreks (Member) commented Jul 2, 2021

I tested it using the provided deployment YAML and could not reproduce the issue. Are you sure you didn't check the "Show previous logs" option?

johnr84 (Author) commented Jul 2, 2021

@floreks

No, I didn't choose the "Show previous logs" option. I just tested the same setup on a kind cluster on my Mac and there it works as expected (in Chrome). I am facing this problem in Chrome on Windows, and I am not sure what the reason is. As mentioned, it works fine in the old Dashboard version 2.0.1 on our k8s 1.18.3 cluster.

(screenshot attached)

floreks (Member) commented Jul 2, 2021

Are there any unusual logs in the dev console? Can you also check API calls?


dano0b commented Jul 2, 2021

The dev console is empty, no errors or warnings.

Every second click creates a different query:

:method: GET
:path: /api/v1/log/default/color-55bdd5665b-jpjmb/color?logFilePosition=&referenceTimestamp=2021-07-02T13:24:08.961448936+02:00&referenceLineNum=-1&offsetFrom=2500&offsetTo=2600&previous=false
:method: GET
:path: /api/v1/log/default/color-55bdd5665b-jpjmb/color?logFilePosition=&referenceTimestamp=&referenceLineNum=0&offsetFrom=0&offsetTo=100&previous=false
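For comparison, a minimal sketch of replaying both requests directly against the Dashboard API with curl (the host and token are placeholders, not taken from this issue; the "+" in the reference timestamp has to be URL-encoded as %2B):

# "healthy" request: explicit reference point and offsets
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://<dashboard-host>/api/v1/log/default/color-55bdd5665b-jpjmb/color?logFilePosition=&referenceTimestamp=2021-07-02T13:24:08.961448936%2B02:00&referenceLineNum=-1&offsetFrom=2500&offsetTo=2600&previous=false"

# "empty" request: no reference timestamp, default 0..100 window
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://<dashboard-host>/api/v1/log/default/color-55bdd5665b-jpjmb/color?logFilePosition=&referenceTimestamp=&referenceLineNum=0&offsetFrom=0&offsetTo=100&previous=false"

Comparing the two responses side by side should show whether the empty result comes from the backend or from the frontend dropping the reference parameters.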

floreks (Member) commented Jul 2, 2021

And the response of this call is what you see in the log viewer?


dano0b commented Jul 2, 2021

(screen recording attached: dashboard-empty-responses)
Every second response is more or less empty:

{
 "info": {
  "podName": "color-55bdd5665b-jpjmb",
  "containerName": "color",
  "initContainerName": "",
  "fromDate": "",
  "toDate": "",
  "truncated": false
 },
 "selection": {
  "referencePoint": {
   "timestamp": "",
   "lineNum": 0
  },
  "offsetFrom": 0,
  "offsetTo": 0,
  "logFilePosition": ""
 },
 "logs": []
}

the healthy one:

{
 "info": {
  "podName": "color-55bdd5665b-jpjmb",
  "containerName": "color",
  "initContainerName": "",
  "fromDate": "2021-07-02T14:46:37.200487993+02:00",
  "toDate": "2021-07-02T14:48:16.360482711+02:00",
  "truncated": false
 },
 "selection": {
  "referencePoint": {
   "timestamp": "2021-07-02T14:06:33.223579256+02:00",
   "lineNum": -1
  },
  "offsetFrom": 2400,
  "offsetTo": 2500,
  "logFilePosition": ""
 },
 "logs": [
  {
   "timestamp": "2021-07-02T14:46:37.200487993+02:00",
   "content": "Meet the \u001b[92mcucumber!"
  },
...
}

floreks (Member) commented Jul 2, 2021

Interesting. The parameters are missing. We have to find a way to reproduce this.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Sep 30, 2021

dano0b commented Sep 30, 2021

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label on Sep 30, 2021
@k8s-triage-robot

(same triage-bot message as above)

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Dec 29, 2021

dano0b commented Dec 29, 2021

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label on Dec 29, 2021
@k8s-triage-robot

(same triage-bot message as above)

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Mar 29, 2022
@maciaszczykm (Member)

/lifecycle frozen

k8s-ci-robot added the lifecycle/frozen label and removed the lifecycle/stale label on Mar 30, 2022

ivyswen commented Nov 3, 2022

/remove-lifecycle stale


ivyswen commented Nov 3, 2022

Installation method: kubeadm
Kubernetes version: 1.24.7
Dashboard version: 2.6.1
Operating system: CentOS Linux release 7.7.1908

Same problem.
