
Crashed when trying to view logs #392

Closed
lakaf opened this issue Oct 29, 2019 · 16 comments
Labels
bug Something isn't working

Comments

@lakaf

lakaf commented Oct 29, 2019




Describe the bug
K9s crashed when trying to view a pod's logs, showing the error:
Boom!! runtime error: invalid memory address or nil pointer dereference.

To Reproduce
Steps to reproduce the behavior:

  1. Go to Pod view
  2. Click on any pod
  3. Hit l
  4. See error

Expected behavior
See logs of the chosen pod.

Versions (please complete the following information):

  • OS: OSX 10.14.6
  • K9s 0.9.2
  • K8s client 1.15.1, K8s server 1.13.12

Additional context
Logs generated from debug mode:

12:11PM WRN NodeMetrics &errors.errorString{s:"Invalid informer"}
12:11PM WRN No plugin configuration found
12:11PM WRN No plugin configuration found
12:11PM WRN No plugin configuration found
12:11PM WRN NodeMetrics &errors.errorString{s:"Invalid informer"}
12:11PM DBG Cluster updater canceled!
12:11PM DBG po updater canceled!
12:11PM ERR Boom! runtime error: invalid memory address or nil pointer dereference
12:11PM ERR goroutine 1 [running]:
runtime/debug.Stack(0x3346a80, 0x2460b03, 0x0)
/usr/local/Cellar/go/1.13/libexec/src/runtime/debug/stack.go:24 +0x9d
github.com/derailed/k9s/cmd.run.func1()
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/cmd/root.go:64 +0x181
panic(0x227a0c0, 0x332aa40)
/usr/local/Cellar/go/1.13/libexec/src/runtime/panic.go:679 +0x1b2
github.com/derailed/tview.(*Application).Run.func1(0xc0005e1e00)
/Users/fernand/go_wk/derailed/pkg/mod/github.com/derailed/tview@v0.2.4/application.go:149 +0x82
panic(0x227a0c0, 0x332aa40)
/usr/local/Cellar/go/1.13/libexec/src/runtime/panic.go:679 +0x1b2
github.com/derailed/k9s/internal/watch.(*Informer).Get(0x0, 0x245ccaf, 0x2, 0xc0009a0540, 0x26, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/watch/informer.go:133 +0x37
github.com/derailed/k9s/internal/resource.(*Pod).PodLogs(0xc00000dd00, 0x26aefc0, 0xc0000e7d00, 0xc000ccd020, 0xc000846540, 0xb, 0xc00084654c, 0x1a, 0x0, 0x0, ...)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/resource/pod.go:123 +0x116
github.com/derailed/k9s/internal/resource.(*Pod).Logs(0xc00000dd00, 0x26aefc0, 0xc0000e7d00, 0xc000ccd020, 0xc000846540, 0xb, 0xc00084654c, 0x1a, 0x0, 0x0, ...)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/resource/pod.go:157 +0x218
github.com/derailed/k9s/internal/views.(*logsView).doLoad(0xc0005e68a0, 0xc000846540, 0x26, 0x0, 0x0, 0x0, 0xc0005b8660, 0xc)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/views/logs.go:116 +0x35e
github.com/derailed/k9s/internal/views.(*logsView).load(0xc0005e68a0, 0x0, 0x0, 0x26c1200)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/views/logs.go:87 +0x63
github.com/derailed/k9s/internal/views.(*logsView).reload(0xc0005e68a0, 0x0, 0x0, 0x269c540, 0xc0000b7b50, 0xc00000de00)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/views/logs.go:55 +0x130
github.com/derailed/k9s/internal/views.(*podView).showLogs(0xc0000b7b50, 0xc000846540, 0x26, 0x0, 0x0, 0x269c540, 0xc0000b7b50, 0xc000198b00)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/views/pod.go:157 +0xa2
github.com/derailed/k9s/internal/views.(*podView).viewLogs(0xc0000b7b50, 0x3334c00, 0xc000198be0)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/views/pod.go:150 +0x121
github.com/derailed/k9s/internal/views.(*podView).logsCmd(...)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/views/pod.go:131
github.com/derailed/k9s/internal/ui.(*Table).keyboard(0xc000460000, 0xc0001dc100, 0xc00054b920)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/ui/table.go:187 +0x214
github.com/derailed/tview.(*Box).WrapInputHandler.func1(0xc0001dc100, 0xc0006a82c0)
/Users/fernand/go_wk/derailed/pkg/mod/github.com/derailed/tview@v0.2.4/box.go:161 +0x75
github.com/derailed/tview.(*Application).Run(0xc0005e1e00, 0x0, 0x0)
/Users/fernand/go_wk/derailed/pkg/mod/github.com/derailed/tview@v0.2.4/application.go:234 +0x415
github.com/derailed/k9s/internal/views.(*appView).Run(0xc0005303c0)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/internal/views/app.go:309 +0x122
github.com/derailed/k9s/cmd.run(0x3336040, 0xc0000c8160, 0x0, 0x2)
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/cmd/root.go:77 +0x131
github.com/spf13/cobra.(*Command).execute(0x3336040, 0xc0000bc040, 0x2, 0x2, 0x3336040, 0xc0000bc040)
/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:830 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x3336040, 0x0, 0x0, 0x0)
/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
/Users/fernand/go_wk/derailed/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
github.com/derailed/k9s/cmd.Execute()
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/cmd/root.go:54 +0x2d
main.main()
/Users/fernand/go_wk/derailed/src/github.com/derailed/k9s/main.go:26 +0x1a6

@derailed derailed added the bug Something isn't working label Oct 29, 2019
@derailed
Owner

@lakaf Thank you for your great report! I think the issue here might be caused by RBAC restrictions on your cluster. Do you know if your user has the ability to get, list, and watch pod resources? Could you send us more details about this? Regardless, it is indeed a bug in K9s, so thank you for filing it!
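
If it helps, one quick way to check is to run something like this with the same kubeconfig/user K9s is using (the namespace below is a placeholder):

kubectl auth can-i get pods -n <namespace>
kubectl auth can-i list pods -n <namespace>
kubectl auth can-i watch pods -n <namespace>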

@lakaf
Author

lakaf commented Oct 29, 2019

Hi @derailed, I think this user does not have access to "namespaces"; could this be the cause? I saw some errors about this earlier in the log file:

12:17PM ERR Checking NS Access error="namespaces "xxxxxxxx" is forbidden: User "system:serviceaccount:xxxxxxxx:yyyyyyyy" cannot get resource "namespaces" in API group "" in the namespace "xxxxxxxx""

12:17PM ERR CRDs load fail error="customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:xxxxxxx:yyyyyy" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope"

@lakaf
Author

lakaf commented Oct 29, 2019

Also, this tool is awesome. Thank you for your hard work @derailed, and I'm looking forward to seeing it get even better!

@derailed
Owner

Hi @lakaf Thank you for your kind words and for reporting back!

If you start K9s with -n set to a namespace you do have access to, can you get to the container logs then?

derailed added a commit that referenced this issue Oct 30, 2019
@lakaf
Author

lakaf commented Oct 30, 2019

Sorry for the late reply. I tried with the -n param, but I still get the same error when trying to view logs. Hope that helps!

@lakaf
Author

lakaf commented Oct 30, 2019

Hi! Just an update: I tried with the latest 0.9.3 release, and it doesn't crash any more (good!), but I got this

[screenshot attached]

and also in the log I got this:

1:08PM ERR Invalid informer error="Invalid informer"
1:08PM WRN NodeMetrics &errors.errorString{s:"Invalid informer"}

My user can see logs by using kubectl logs command.

@derailed
Owner

@lakaf Thank you for reporting back! Let's try this.

# Start K9s with the namespace you do have access to say ns=fred
k9s -n fred -l debug
# Get the log location
k9s info
# Grab the location of the K9s logs and tail the logs
# Now in K9s navigate to pods, select one and look at the container logs
# What do you see in the K9s logs? 
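# For instance, something along these lines (the log path below is only a placeholder;
# use whatever path k9s info actually prints):
tail -f <path-printed-by-k9s-info>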

@lakaf
Author

lakaf commented Oct 31, 2019

Hi @derailed, I got these log lines the moment I hit l:

10:19AM ERR Invalid informer error="Invalid informer"
10:19AM DBG Closed channel detected. Bailing out...
10:19AM DBG LOG LINES 1
10:19AM DBG Switching page to logs
10:19AM DBG updateLogs view bailing out!
10:19AM DBG po updater canceled!

@derailed
Owner

@lakaf Could you share your RBAC policy for this user? If you've started K9s in a namespace the user actually has access to, then based on the logs you've shared I am guessing this user does not have the watch verb for that namespace.
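
If it's easier, something like this should dump what we need (the role and namespace names below are placeholders):

kubectl describe rolebindings -n <namespace>
kubectl get role <role-name> -n <namespace> -o yaml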

@lakaf
Author

lakaf commented Nov 4, 2019

@derailed here it is:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
       {"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"Role","metadata":{"annotations":{},"name":"ci-role","namespace":"xxxxxxxxxxx"},"rules":[{"apiGroups":["","extensions","apps","batch"],"resources":["deployments","replicasets","pods","configmaps","secrets","services","ingresses","crontabs","cronjobs","jobs"],"verbs":["get","list","watch","create","edit","update","patch","delete"]},{"apiGroups":[""],"resources":["pods","pods/attach","pods/exec","pods/portforward","pods/proxy","persistentvolumeclaims"],"verbs":["create","delete","deletecollection","get","list","patch","update","watch"]},{"apiGroups":["apps"],"resources":["daemonsets","deployments","deployments/rollback","deployments/scale","replicasets","replicasets/scale","statefulsets"],"verbs":["create","delete","deletecollection","get","list","patch","update","watch"]},{"apiGroups":[""],"resources":["namespaces/status","pods/log","pods/status","replicationcontrollers/status","resourcequotas/status","events"],"verbs":["get","list","watch"]},{"apiGroups":["autoscaling"],"resources":["horizontalpodautoscalers"],"verbs":["create","delete","deletecollection","get","list","patch","update","watch"]},{"apiGroups":["argoproj.io"],"resources":["workflows"],"verbs":["get","list","watch","create","update","patch","delete"]}]}
  creationTimestamp: "2019-09-06T18:29:54Z"
  name: ci-role
  namespace: xxxxxxxxxxx
  resourceVersion: "21243200"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/xxxxxxxxxxx/roles/ci-role
  uid: 521b4f90-d0d4-11e9-a6a7-065e840fbfc0
rules:
- apiGroups:
  - ""
  - extensions
  - apps
  - batch
  resources:
  - deployments
  - replicasets
  - pods
  - configmaps
  - secrets
  - services
  - ingresses
  - crontabs
  - cronjobs
  - jobs
  verbs:
  - get
  - list
  - watch
  - create
  - edit
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - pods
  - pods/attach
  - pods/exec
  - pods/portforward
  - pods/proxy
  - persistentvolumeclaims
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - deployments/rollback
  - deployments/scale
  - replicasets
  - replicasets/scale
  - statefulsets
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas/status
  - events
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - argoproj.io
  resources:
  - workflows
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  verbs:
  - create
  - top
  - get
  - list
  - patch
  - update
  - watch

@pawel-buczkowski-payu

pawel-buczkowski-payu commented Nov 13, 2019

I have a similar problem. Logs are here:

1:45PM DBG Namespace did not change XXX
1:45PM DBG Setting active ns "XXX"
1:45PM DBG deploy updater canceled!
1:45PM WRN NodeMetrics &errors.errorString{s:"Invalid informer"}
1:45PM WRN No plugin configuration found
1:45PM ERR Unable to retrieve pods Invalid informer error="Invalid informer"

The problem occurs only in our production environment (more locked down); there is no problem on staging (less restricted). Could you confirm which verb on which resource is required? Is it watch on namespaces?

BTW. This tool is really cool :)

@derailed
Owner

derailed commented Nov 13, 2019

@lakaf @pawel-buczkowski-payu Thank you so much for the extra info!

@lakaf Provided you can update your RBAC policies, let's try adding the rule below to your ci-role.
I think the issue here is that the role does not have access to get the namespace xxx, and hence fetching the logs fails in your case.

  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get

@pawel-buczkowski-payu Can you scroll up in your logs? You should see other errors above; grep for Checking NS Access. I am guessing your user does not have get access to the namespace you are targeting, or to all namespaces. K9s ideally needs get/list on namespace resources.
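
Once the access is in place, a quick sanity check would be something like this (the namespace below is a placeholder):

kubectl auth can-i get namespaces -n <namespace>
kubectl auth can-i list namespaces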

@pawel-buczkowski-payu

@derailed Thanks for your quick answer. In fact, I did find the Checking NS Access error. We then added get on namespaces, and now everything works fine :)
Thanks!

@jmem0120

@lakaf
@derailed

I hit the same problem in version 0.9.3.
I tried adding -n <namespace> after k9s, and there was no error when viewing logs.
Hope that works for you too.

@fardin01

What works is either k9s -n <namespace>, or setting the kubectl current context to the desired cluster and/or namespace and then starting k9s.
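
For example, something like this (the namespace below is a placeholder):

kubectl config set-context --current --namespace=<namespace>
k9s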

@derailed
Owner

@lakaf @fardin01 Thank you all for reporting back!
