
Confusing error message when no kubeconfig is provided #918

Closed
mlevesquedion opened this issue Jan 15, 2021 · 9 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mlevesquedion

I observed this issue when using the latest kubectl, but I believe it is relevant to client-go (and maybe cli-runtime).

When running kubectl commands without a config file (no --kubeconfig flag, no KUBECONFIG environment variable, no ~/.kube/config), I get the following error message:

 The connection to the server localhost:8080 was refused - did you specify the right host or port?

Since I didn't provide any configuration, I would expect to see the following error message, defined in cli-runtime/pkg/genericclioptions/client_config.go:

var (
	ErrEmptyConfig = clientcmd.NewEmptyConfigError(`Missing or incomplete configuration info.  Please point to an existing, complete config file:


  1. Via the command-line flag --kubeconfig
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.`)
)

I did some digging, and it looks like, because of the following code in client-go/tools/clientcmd/client_config.go ...

var (
	// ClusterDefaults has the same behavior as the old EnvVar and DefaultCluster fields
	// DEPRECATED will be replaced
	ClusterDefaults = clientcmdapi.Cluster{Server: getDefaultServer()}
	// DefaultClientConfig represents the legacy behavior of this package for defaulting
	// DEPRECATED will be replaced
	DefaultClientConfig = DirectClientConfig{*clientcmdapi.NewConfig(), "", &ConfigOverrides{
		ClusterDefaults: ClusterDefaults,
	}, nil, NewDefaultClientConfigLoadingRules(), promptedCredentials{}}
)

// getDefaultServer returns a default setting for DefaultClientConfig
// DEPRECATED
func getDefaultServer() string {
	if server := os.Getenv("KUBERNETES_MASTER"); len(server) > 0 {
		return server
	}
	return "http://localhost:8080"
}

... the server defaults to http://localhost:8080 when creating the config, which prevents the config from being identified as empty later on.
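
A minimal sketch of that code path (assuming k8s.io/client-go, running outside a cluster, with none of the three config sources present):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// One might expect ErrEmptyConfig here, but the ClusterDefaults
	// override supplies a server, so the merged config is no longer
	// considered empty and no error is returned.
	loader := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{ClusterDefaults: clientcmd.ClusterDefaults}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loader, overrides).ClientConfig()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("server:", cfg.Host) // prints http://localhost:8080
}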

Judging by the DEPRECATED in the comments, it looks like this behavior is going to change soon? Is there an ETA on this?

In any case, if this is indeed an issue and not WAI, I would be interested in helping with the fix.

@comerford

Similarly, I received this error when I had neglected to set automountServiceAccountToken to true for the bitnami/kubectl container running in an existing cluster (all the environment variables were set correctly). I even wrote it up on Server Fault to hopefully stop someone else from wasting hours on a cryptic error message:

https://serverfault.com/q/1050696/108132

If an otherwise valid config exists, it would be very helpful if the error message carped about the lack of credentials, or at least in some way indicated that the connection to localhost:8080 is a failure after a fallback to defaults.
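
For illustration, a rough sketch of the kind of check I mean (warnIfDefaulted is a hypothetical helper, not part of client-go; it just compares against the hard-coded fallback from getDefaultServer quoted above):

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

// warnIfDefaulted is hypothetical: before surfacing a raw connection
// error, check whether the host is the hard-coded fallback and say so,
// instead of "connection to the server localhost:8080 was refused".
func warnIfDefaulted(cfg *rest.Config) error {
	if cfg.Host == "http://localhost:8080" {
		return fmt.Errorf("server defaulted to %s: no kubeconfig or usable in-cluster credentials were found", cfg.Host)
	}
	return nil
}

func main() {
	cfg := &rest.Config{Host: "http://localhost:8080"}
	fmt.Println(warnIfDefaulted(cfg))
}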

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 21, 2021
@furkhat

furkhat commented Apr 25, 2021

/remove-lifecycle stale

It's kind of weird to me that client-go executes kubectl under the hood, because if you build a Go program that uses client-go and run it in an environment without kubectl, it won't work.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 25, 2021
@liggitt
Member

liggitt commented Apr 25, 2021

If you build a Go program that uses client-go and run it in an environment without kubectl, it won't work.

Do you have more details of what you are observing? client-go does not invoke kubectl. It's possible some of the error messages shown when misconfigured reference command-line flags that are typically associated with kubectl, but there's no runtime dependency on a kubectl binary.

@furkhat

furkhat commented Apr 29, 2021

Turns out in my case it was expected behavior. client-go was invoking kubectl because, from client-go's perspective, it's just an exec credential provider from the kubeconfig:

users:
- name: kubernetes-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
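
In other words, client-go itself honors the exec stanza; a rough equivalent expressed directly on a rest.Config (the server URL is a placeholder):

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// client-go, not kubectl, shells out to the configured command and
	// reads an ExecCredential from its stdout; here that command just
	// happens to be the kubectl binary named in the kubeconfig above.
	cfg := &rest.Config{
		Host: "https://example-cluster:6443", // placeholder
		ExecProvider: &clientcmdapi.ExecConfig{
			APIVersion: "client.authentication.k8s.io/v1beta1",
			Command:    "kubectl",
		},
	}
	fmt.Println(cfg.ExecProvider.Command) // kubectl
}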

@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 28, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 27, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
