
UI display messed up #448

Closed
HawickMason opened this issue Dec 26, 2019 · 5 comments
Labels
bug Something isn't working

Comments

@HawickMason

Describe the bug
The GUI of my k9s is all messed up, with a 'space' along the border of the 'Resource View', as shown in the picture below.
[screenshot: k9s Resource View with a misaligned border]

To Reproduce
Steps to reproduce the behavior:
Just start k9s, go into a pod to see the logs, and return to the main page, or press ':' to fire up the command mode; either will result in the display shown in the picture above.

Expected behavior
A neat and clear UI display, as shown in the docs and the demo videos.

Screenshots
See picture above

Versions (please complete the following information):

  • OS: macOS High Sierra 10.13.6
  • K9s : 0.9.3
  • K8s : v1.13.10

Additional context
Same thing happened with the previous release of k9s on my computer.
FYI: my terminal emulator is iTerm2.

@HawickMason changed the title from "UI display messed" to "UI display messed up" Dec 26, 2019
@derailed added the bug (Something isn't working) label Dec 26, 2019
@derailed
Owner

@HawickMason Thank you for the report! Looks like something is indeed toast with the display, but it's hard to tell from the pic. This is a pretty common scenario, so I am guessing it's an issue specific to your pod or cluster. Could you see if there is anything in the K9s logs? Also, does this happen on any pods or specific ones? Thank you for the details!
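
(For anyone else chasing this: a quick sketch for pulling up the k9s logs. The log path below is only a placeholder, since the location varies by version and install; recent releases print the real paths via k9s info.)

    # Recent k9s releases print the configuration and log file locations:
    k9s info

    # Tail the log while reproducing the glitch -- substitute whatever
    # path `k9s info` reports on your machine:
    tail -f /tmp/k9s-$USER.log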

@HawickMason
Author

Thanks for your time~
Firstly, the k9s I used is the binary tarball downloaded from the 'Releases' page. Here are the updates on this issue:

Updates:

  1. Does this happen on any pods or specific ones?
    It happens on every pod. Also, it is unlikely to be a pod- or cluster-specific issue, because my colleague is working just fine with k9s on the same cluster... (of course, he is using the Brew-installed version of k9s)

  2. Any detailed log?
    12:55PM INF 🐶 K9s starting up...
    12:55PM INF ✅ Kubernetes connectivity
    12:55PM INF No skin file found. Loading stock skins.
    12:55PM INF No benchmark config file found, using defaults. error="open /Users/mason/.k9s/bench-kubernetes.yml: no such file or directory"
    12:55PM INF No namespace specified using all namespaces
    Log file created at: 2019/12/26 12:55:51
    Running on machine: xxxxx
    Binary: Built with gc go1.13 for darwin/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    12:55PM ERR No custom aliases defined in config error="open /Users/mason/.k9s/alias.yml: no such file or directory"
    12:55PM WRN No plugin configuration found
    12:55PM WRN No plugin configuration found
    E1226 12:55:51.203079 15576 reflector.go:123] pkg/mod/k8s.io/client-go@v0.0.0-20190918160344-1fbdaa4c8d90/tools/cache/reflector.go:96: Failed to list *v1beta1.PodMetrics: the server is currently unable to handle the request (get pods.metrics.k8s.io)
    E1226 12:55:51.203079 15576 reflector.go:123] pkg/mod/k8s.io/client-go@v0.0.0-20190918160344-1fbdaa4c8d90/tools/cache/reflector.go:96: Failed to list *v1beta1.NodeMetrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
    E1226 12:55:52.211028 15576 reflector.go:123] pkg/mod/k8s.io/client-go@v0.0.0-20190918160344-1fbdaa4c8d90/tools/cache/reflector.go:96: Failed to list *v1beta1.PodMetrics: the server is currently unable to handle the request (get pods.metrics.k8s.io)
    E1226 12:55:52.216327 15576 reflector.go:123] pkg/mod/k8s.io/client-go@v0.0.0-20190918160344-1fbdaa4c8d90/tools/cache/reflector.go:96: Failed to list *v1beta1.NodeMetrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
    E1226 12:55:53.221057 15576 reflector.go:123] pkg/mod/k8s.io/client-go@v0.0.0-20190918160344-1fbdaa4c8d90/tools/cache/reflector.go:96: Failed to list *v1beta1.PodMetrics: the server is currently unable to handle the request (get pods.metrics.k8s.io)
    E1226 12:55:53.225817 15576 reflector.go:123] pkg/mod/k8s.io/client-go@v0.0.0-20190918160344-1fbdaa4c8d90/tools/cache/reflector.go:96: Failed to list *v1beta1.NodeMetrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
    E1226 12:55:54.236264 15576 reflector.go:123] pkg/mod/k8s.io/client-go@v0.0.0-20190918160344-1fbdaa4c8d90/tools/cache/reflector.go:96: Failed to list *v1beta1.PodMetrics: the server is currently unable to handle the request (get pods.metrics.k8s.io)
    E1226 12:55:54.236386 15576 reflector.go:123] pkg/mod/k8s.io/client-go@v0.0.0-20190918160344-1fbdaa4c8d90/tools/cache/reflector.go:96: Failed to list *v1beta1.NodeMetrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
    E1226 12:55:55.242365 15576 reflector.go:123] pkg/mod/k8s.io/client-go@v0.0.0-20190918160344-1fbdaa4c8d90/tools/cache/reflector.go:96: Failed to list *v1beta1.PodMetrics: the server is currently unable to handle the request (get pods.metrics.k8s.io)

Does that ring any bells? Looking forward to your reply~

Thanks again for what you've done~

@derailed
Owner

@HawickMason Thank you for posting back and for this great report! I am not seeing anything in this log dump. It looks like the metrics server is not running on your cluster, but that should not trigger this behavior. Could you try running k9s with the debug log level, like so: k9s -l debug, and see if there is anything specific when you select a pod, view the logs, and go back? Also, if you hit deployments or services and view the logs, does the display get messed up too? Thank you!
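
(As an aside, a quick way to confirm the metrics-server diagnosis above, assuming it registers under the usual v1beta1.metrics.k8s.io APIService name:)

    # Available=False here lines up with the "unable to handle the request
    # (get pods.metrics.k8s.io)" errors in the log dump above:
    kubectl get apiservice v1beta1.metrics.k8s.io

    # Then re-run k9s with debug logging while reproducing the glitch:
    k9s -l debug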

@HawickMason
Author

Updates:
Since k9s -l debug still shows nothing wrong in the log, I figured it might be something related to the UI components you are using.
And indeed, it is the tview library you are using that is causing the trouble for me...

Related issue:
rivo/tview#118

And when I set LC_CTYPE to anything other than zh_CN.UTF-8, the UI looks clean and neat and everything works fine~
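
For anyone hitting the same thing, a minimal workaround sketch (en_US.UTF-8 is only an example; per the above, any locale other than zh_CN.UTF-8 did the trick for me):

    # One-off: launch k9s under a different locale:
    LC_CTYPE=en_US.UTF-8 k9s

    # Or persist it in your shell profile (~/.bash_profile, ~/.zshrc, ...):
    export LC_CTYPE=en_US.UTF-8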

So I guess I can close this issue now. Hopefully anyone else who encounters this problem can fix it the same way.

@derailed
Owner

@HawickMason Thank you for researching this and posting the details here! Good to know...
