Describe the bug
When there is a permission issue while trying to reach the Kubernetes API server, k9s displays the message "Error: `pod` command not found".
Ideally it should display a permission error instead.
To Reproduce
Steps to reproduce the behavior:
Try to connect to a cluster using invalid credentials. One way that worked for me was creating a new user in the kubeconfig file, like this:
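For example, a user entry carrying a deliberately bogus token; the user name and token value here are placeholders:
users:
- name: bogus-user
  user:
    token: this-token-is-invalid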
Then reference this user in the contexts: section of the kubeconfig file and switch to that context with kubectl config use-context, as sketched below.
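A matching context entry and the switch command might look like this (the cluster and context names are placeholders):
contexts:
- context:
    cluster: my-cluster
    user: bogus-user
  name: bogus-context
>$kubectl config use-context bogus-context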
This is the error you should get when running kubectl get pods:
>$kubectl get pods
E0507 13:03:20.124339 88950 memcache.go:265] couldn't get current server API group list: unknown
E0507 13:03:20.220605 88950 memcache.go:265] couldn't get current server API group list: unknown
E0507 13:03:20.317574 88950 memcache.go:265] couldn't get current server API group list: unknown
E0507 13:03:20.411196 88950 memcache.go:265] couldn't get current server API group list: unknown
E0507 13:03:20.508126 88950 memcache.go:265] couldn't get current server API group list: unknown
Error from server (Forbidden): unknown
If I now run k9s, I get the following error:
>$k9s
Error: `pod` command not found
Usage:
k9s [flags]
k9s [command]
Available Commands:
completion Generate the autocompletion script for the specified shell
help Help about any command
info List K9s configurations info
version Print version/build info
Flags:
-A, --all-namespaces Launch K9s in all namespaces
--as string Username to impersonate for the operation
--as-group stringArray Group to impersonate for the operation
--certificate-authority string Path to a cert file for the certificate authority
--client-certificate string Path to a client certificate file for TLS
--client-key string Path to a client key file for TLS
--cluster string The name of the kubeconfig cluster to use
-c, --command string Overrides the default resource to load when the application launches
--context string The name of the kubeconfig context to use
--crumbsless Turn K9s crumbs off
--headless Turn K9s header off
-h, --help help for k9s
--insecure-skip-tls-verify If true, the server's caCertFile will not be checked for validity
--kubeconfig string Path to the kubeconfig file to use for CLI requests
--logFile string Specify the log file (default "/Users/asteppat/Library/Application Support/k9s/k9s.log")
-l, --logLevel string Specify a log level (info, warn, debug, trace, error) (default "info")
--logoless Turn K9s logo off
-n, --namespace string If present, the namespace scope for this CLI request
--readonly Sets readOnly mode by overriding readOnly configuration setting
-r, --refresh int Specify the default refresh rate as an integer (sec) (default 2)
--request-timeout string The length of time to wait before giving up on a single server request
--screen-dump-dir string Sets a path to a dir for a screen dumps
--token string Bearer token for authentication to the API server
--user string The name of the kubeconfig user to use
--write Sets write mode by overriding the readOnly configuration setting
Use "k9s [command] --help" for more information about a command.
panic: `pod` command not found
goroutine 1 [running]:
github.com/derailed/k9s/cmd.Execute()
github.com/derailed/k9s/cmd/root.go:72 +0x80
main.main()
github.com/derailed/k9s/main.go:32 +0x1c
Expected behavior
I would expect to get a permission error instead.
Versions (please complete the following information):
OS: macOS 14.4.1
K9s: v0.32.4
K8s:
Client Version: v1.28.1
Server Version: v1.23.5
I'm getting this even though kubectl get pods works fine.
It turned out this was because I had added a namespaces config to my main config.yaml file, thinking I could set namespace favourites for all of my clusters at once. That isn't supposed to work, but it somehow produced this error on every cluster. I reverted the bad config and now it works; a rough sketch of what I had is below.
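For reference, the offending entry looked roughly like this; the exact keys are from memory and may not match k9s's actual schema, since namespace favourites belong in the per-context config rather than the main config.yaml:
k9s:
  refreshRate: 2
  namespace:
    favorites:
    - default
    - kube-system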
PS: The amount of panic I had without k9s really drives home how much I love this tool!