Finding local kubectl version either requires timeout or deprecated option (?) #1216

Open · sftim opened this issue May 17, 2022 · 19 comments
Labels: kind/bug, lifecycle/rotten, needs-triage

@sftim (Contributor) commented May 17, 2022

What happened:

  • I wanted to find out the version of kubectl I have installed, even though I have no selected context.
  • I wanted to find out the version of kubectl I have installed, even though I have a current context that is offline.

Sample console sessions:

laptop:~$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.0
Kustomize Version: v4.5.4
Unable to connect to the server: dial tcp 203.0.113.42:443: connect: no route to host
$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.0
Kustomize Version: v4.5.4
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ kubectl version --client                               
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
$ kubectl version 
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
The connection to the server localhost:8080 was refused - did you specify the right host or port?

What you expected to happen:
Something like:

  • kubectl version shows the client version only.
  • kubectl version --client does not show any warning.

For example:

$ kubectl version
Client Version: v1.24.0
Kustomize Version: v4.5.4
$ kubectl version --client
Client Version: v1.24.0
Kustomize Version: v4.5.4

It's OK to have kubectl version --include-cluster-info or kubectl version --remote show the remote version too.

I would also be OK if kubectl version only checked the remote version when a current context is explicitly set (no fallback to an implicit server URL).

How to reproduce it (as minimally and precisely as possible):

  1. Visit https://kubernetes.io/docs/tasks/tools/#kubectl
  2. Install kubectl as instructed and, on a vanilla system, check the kubectl version.

Anything else we need to know?:
As a project, we need a way for people who are setting up a brand new kubectl to confirm that they have a working and current kubectl.

It's very helpful if this confirmation step doesn't require explaining that there is a warning: it's good practice not to have people become accustomed to seeing and then skipping warning messages.

Environment:

  • Kubernetes client and server versions (use kubectl version): v1.24.0
  • Cloud provider or hardware configuration: n/a
  • OS (e.g: cat /etc/os-release): Linux, but relevant to all OSs
@sftim added the kind/bug label May 17, 2022
@k8s-ci-robot added the needs-triage label May 17, 2022
@sftim (Contributor, Author) commented May 17, 2022

Prompted by kubernetes/website#33764

@knight42 (Member) commented May 18, 2022

Is it reasonable to get the client version by running kubectl version -oyaml --client? The example output is:

clientVersion:
  buildDate: "2022-05-03T13:36:49Z"
  compiler: gc
  gitCommit: 4ce5a8954017644c5420bae81d72b09b735c21f0
  gitTreeState: clean
  gitVersion: v1.24.0
  goVersion: go1.18.1
  major: "1"
  minor: "24"
  platform: darwin/amd64
kustomizeVersion: v4.5.4

We could extract the kubectl version like the following:

$ kubectl version -oyaml --client|awk '/gitVersion/{print $2;}'
v1.24.0

@sftim (Contributor, Author) commented May 18, 2022

I had believed that kubectl version -oyaml --client was deprecated because --client is deprecated. However, if the deprecation only applies to the default output format, then these both work:

kubectl version --client -ojson | jq -r .clientVersion.gitVersion 
kubectl version -oyaml --client|awk '/gitVersion/{print $2;}'

A shame to require external tools (jq, awk) to check the install.

kubectl version --client --short would work OK apart from the deprecation warning.
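For reference, a sketch of what that looks like with a v1.24 client (client-only output assumed; the warning and version strings are taken from the console sessions above):

$ kubectl version --client --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.0
Kustomize Version: v4.5.4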

@sftim changed the title from "Finding local kubectl version either requires timeout or deprecated option" to "Finding local kubectl version either requires timeout or deprecated option (?)" May 18, 2022
@sftim (Contributor, Author) commented May 18, 2022

Actually, we could mention that both

  • kubectl version --client -ojson | jq -r .clientVersion.gitVersion
  • kubectl version -oyaml --client|awk '/gitVersion/{print $2;}'

are viable options, and let users choose.

@sftim (Contributor, Author) commented May 20, 2022

It sounds like this is expected behavior, and that we should document the warning as something for readers to be aware of and ignore if they are deploying v1.24 kubectl.

@knight42 (Member) commented:

I believed that kubectl version -oyaml --client is deprecated because --client is deprecated.

Nope, the deprecated flag is actually --short, since the output of kubectl version --short will become the default in the future; the flag --client is not deprecated.

we should document the warning as something for readers to be aware of and ignore if they are deploying v1.24 kubectl.

Yeah that would be great.

@sftim (Contributor, Author) commented May 21, 2022

the flag --client is not deprecated

$ kubectl version --client                               
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4

I think people will see that and think that kubectl version --client is deprecated: they run it, and they see a deprecation warning.

@knight42 (Member) commented:

I think people will see that and think that kubectl version --client is deprecated

Ah, that's because the deprecation warning refers to the default output format (which the --short-style output will replace), not to --client itself, so users still see the warning when they run kubectl version --client.
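For what it's worth, the structured output path from the earlier comment should stay warning-free (a sketch, assuming v1.24 only prints that warning for the default output format; field values as quoted earlier in this thread):

$ kubectl version --client -o yaml
clientVersion:
  gitVersion: v1.24.0
  # ...remaining clientVersion fields as in the output quoted above...
kustomizeVersion: v4.5.4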

@soltysh (Contributor) commented May 25, 2022

As discussed on Slack, and earlier today during the bug scrub, we need to:

  1. document that the deprecation warning is expected
  2. document that first-timers who don't have a cluster should use --client

@sftim I looked into when you hit the 30s timeout: that happens when you have an invalid kubeconfig pointing to a valid host but the wrong port. In all other cases (missing host, missing kubeconfig, invalid host) you should get an instant response.
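A minimal sketch to reproduce that case (hypothetical kubeconfig path, documentation-range address, and port; the host has to be reachable but silently drop traffic on that port, otherwise the dial is refused instantly instead of timing out):

$ kubectl config --kubeconfig=/tmp/wrong-port-kubeconfig set-cluster demo --server=https://192.0.2.10:6444
$ kubectl config --kubeconfig=/tmp/wrong-port-kubeconfig set-context demo --cluster=demo
$ kubectl config --kubeconfig=/tmp/wrong-port-kubeconfig use-context demo
$ kubectl --kubeconfig=/tmp/wrong-port-kubeconfig version
# The client fields print immediately; the server lookup then hangs until the
# TCP dial times out (roughly 30s) before printing the connection error.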

/triage accepted
/help-wanted

@k8s-ci-robot added the triage/accepted label and removed the needs-triage label May 25, 2022
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 23, 2022
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 22, 2022
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned Oct 22, 2022
@k8s-ci-robot (Contributor) commented:

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sftim (Contributor, Author) commented Oct 24, 2022

/reopen
This issue was accepted

/remove-lifecycle rotten

@k8s-ci-robot (Contributor) commented:

@sftim: Reopened this issue.

In response to this:

/reopen
This issue was accepted

/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot reopened this Oct 24, 2022
@k8s-ci-robot removed the lifecycle/rotten label Oct 24, 2022
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 22, 2023
@sftim (Contributor, Author) commented Feb 13, 2023

Also see kubernetes/website#39431

@k8s-triage-robot commented:

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot added the needs-triage label and removed the triage/accepted label Feb 13, 2024
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Mar 14, 2024