Crashed when trying to view logs #392
Comments
@lakaf Thank you for your great report! I think the issue here might be caused by RBAC restrictions on your cluster. Do you know if your user has the ability to get, list, and watch pod resources? Could you send us more details about this? Regardless, it is indeed a bug in K9s, so thank you for filing this!
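(A quick way to check this, as a sketch only; run these with the affected user's kubeconfig or service account token, and substitute your own namespace:)
# Check whether the user can read and watch pods in the namespace
kubectl auth can-i get pods -n <namespace>
kubectl auth can-i list pods -n <namespace>
kubectl auth can-i watch pods -n <namespace>
# Viewing logs also needs the pods/log subresource
kubectl auth can-i get pods --subresource=log -n <namespace>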
Hi @derailed, I think this user does not have access to "namespaces"; could this be the cause? I saw some errors about this earlier in the log file:
12:17PM ERR Checking NS Access error="namespaces "xxxxxxxx" is forbidden: User "system:serviceaccount:xxxxxxxx:yyyyyyyy" cannot get resource "namespaces" in API group "" in the namespace "xxxxxxxx""
12:17PM ERR CRDs load fail error="customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:xxxxxxx:yyyyyy" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope"
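(Those two errors are about cluster-scoped reads. A rough sketch of a grant that would address them, assuming you are allowed to create cluster roles; all names below are made up:)
# Hypothetical cluster-scoped read access for namespaces and CRDs
kubectl create clusterrole k9s-cluster-read \
  --verb=get,list,watch \
  --resource=namespaces,customresourcedefinitions.apiextensions.k8s.io
kubectl create clusterrolebinding k9s-cluster-read \
  --clusterrole=k9s-cluster-read \
  --serviceaccount=<namespace>:<serviceaccount>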
Also, this tool is awesome. Thank you for your hard work @derailed, and I'm looking forward to seeing it get even better!
Hi @lakaf, thank you for your kind words and for reporting back! If you start K9s with -n and a namespace you do have access to, can you get to the container logs then?
Sorry for the late reply. I tried with the -n param but I'm still getting the same error when trying to view logs. Hope it helps!
Hi! Just an update: I tried with the latest 0.9.3 release and it doesn't crash any more (good!), but I got this, and in the log I also got this:
1:08PM ERR Invalid informer error="Invalid informer"
My user can see logs using the kubectl logs command.
@lakaf Thank you for reporting back! Let's try this:
# Start K9s with the namespace you do have access to, say ns=fred
k9s -n fred -l debug
# Get the log location
k9s info
# Grab the location of the K9s logs and tail the logs
# Now in K9s navigate to pods, select one and look at the container logs
# What do you see in the K9s logs?
Hi @derailed, I got these the moment I hit l:
[attached screenshot not captured]
@lakaf Could you share your RBAC policy for this user? If you've started K9s in the namespace the user actually has access to, then based on the logs you've shared I am guessing that this user does not have the watch verb for that namespace.
@derailed here it is:
[attached policy not captured]
I have a similar problem. Logs are here:
[attached logs not captured]
The problem occurs only on our production environment (more restricted); there is no problem on staging (less restricted). Could you confirm which verb for which resource is required? Is it
BTW, this tool is really cool :)
@lakaf @pawel-buczkowski-payu Thank you so much for the extra info! @lakaf Let's try this, provided you can update your RBAC policies: add this rule to your ci-role.
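(The actual rule suggested here was not captured in this transcript. As a rough sketch only, a namespaced rule along these lines covers pods and their logs; the role name is made up, and the namespace and service account are placeholders reusing the "fred" example from earlier:)
# Hypothetical namespaced role for listing pods and reading their logs
kubectl create role k9s-pod-logs \
  --verb=get,list,watch \
  --resource=pods,pods/log \
  -n fred
kubectl create rolebinding k9s-pod-logs \
  --role=k9s-pod-logs \
  --serviceaccount=fred:<serviceaccount> \
  -n fred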
@pawel-buczkowski-payu Can you scroll up in your logs? You should have other errors above that; grep for
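(For example; the log path below is a placeholder, use whatever path k9s info prints:)
# Locate the K9s log file, then search it for earlier errors
k9s info
grep -n "ERR" /path/to/k9s.log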
@derailed Thanks for your quick answer. In fact I found
What works is either
Describe the bug
K9s crashed when trying to view a pod's logs, showing the error:
Boom!! runtime error: invalid memory address or nil pointer dereference.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
See the logs of the chosen pod.
Versions (please complete the following information):
Additional context
Logs generated from debug mode: