role missing in kubectl get command line completion #357

Closed
m4r10k opened this issue Mar 18, 2018 · 11 comments
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.

Comments

@m4r10k

m4r10k commented Mar 18, 2018

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): completion

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: on-prem
  • OS (e.g. from /etc/os-release): Ubuntu 16.04.4
  • Kernel (e.g. uname -a):
  • Install tools: kubeadm

What happened:
The command-line completion of kubectl get does not offer role; it only offers rolebinding.

 kubectl get 
certificatesigningrequest  endpoints                  persistentvolume           rolebinding
clusterrolebinding         event                      persistentvolumeclaim      secret
componentstatus            horizontalpodautoscaler    pod                        service
configmap                  ingress                    poddisruptionbudget        serviceaccount
controllerrevision         job                        podsecuritypolicy          statefulset
cronjob                    namespace                  podtemplate                status
daemonset                  networkpolicy              replicaset                 storageclass
deployment                 node                       replicationcontroller      

What you expected to happen:
The kubectl get command-line completion should also offer role, because getting a role is possible:

kubectl get role pod-reader-default-role
NAME                      AGE
pod-reader-default-role   13m

How to reproduce it (as minimally and precisely as possible):
kubectl get [tab,tab]

@mengqiy added the help wanted label Mar 21, 2018
@mengqiy
Member

mengqiy commented Mar 21, 2018

cc: @superbrothers

@superbrothers
Member

I will look into this ;)

@superbrothers
Member

The reason "clusterrole" and "role" are not completed is that they are not among the resources handled by the human-readable printer.

Fundamentally, we should replace validArgList with a real implementation backed by the discovery service, but that is currently difficult. (It might be achievable using kubectl api-resources.)

So we have the following two choices:

  1. Define temporary print handlers for "clusterrole" and "role": superbrothers/kubernetes@e29340c
  2. Handle "clusterrole" and "role" as default valid args: superbrothers/kubernetes@eacd003

I feel "2" is better because it is simpler.
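
Roughly, option 2 amounts to appending the missing RBAC kinds to the default valid args. A hedged sketch of the idea (the wrapper function and import path are assumptions, not the actual content of superbrothers/kubernetes@eacd003):

```go
// Sketch only: the wrapper name and the exact shape of the change are
// illustrative; cmdutil.ValidArgList is the existing helper (see the
// get.go diff further down in this thread).
package resource

import (
	cmdutil "k8s.io/kubernetes/pkg/kubectl/cmd/util"
)

func validArgListWithRBAC(f cmdutil.Factory) []string {
	validArgs := cmdutil.ValidArgList(f)
	// "role" and "clusterrole" are served by the API but missing from the
	// printer-derived list, so add them as default valid args.
	return append(validArgs, "role", "clusterrole")
}
```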

@mengqiy What do you think?

@mengqiy
Member

mengqiy commented Mar 29, 2018

2 is simpler, but I feel it treats role and clusterrole differently from other resources.

they are not among the resources handled by the human-readable printer.

@juanvallejo @soltysh Any idea how we can fix this?

@juanvallejo

2 is simpler, but I feel it treats role and clusterrole differently from other resources.

I agree this approach is the more straightforward of the two; however, I'm still thinking about a cleaner implementation.

With regard to replacing ValidArgList with a discovery-based alternative, I would not be in support of that, at least with the way it is currently used. Since the ValidArgList helper is called from the NewCmd... functions of a number of commands, talking to the server there would mean making a server request for every command that calls this helper. If the server connection is slow, or the server does not respond, any one of these commands could hang indefinitely.

If we could find a way to patch completions so that they use a discovery-based list of resources, without having to touch the validArgs cobra field, I'd be in favor of that. One thing that comes to mind is to introduce a new validArgsFunc upstream. That way, we can still use discovery logic there, but it only gets invoked when that particular command actually runs.
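
For what it's worth, cobra has since grown exactly this kind of hook (ValidArgsFunction, added well after this thread). A minimal self-contained sketch of the idea, with a hard-coded resource list standing in for the discovery call:

```go
// Sketch of a completion-time valid-args hook using cobra's ValidArgsFunction.
// The resource list is hard-coded here just to show the shape; real code
// would ask the discovery client instead.
package main

import (
	"github.com/spf13/cobra"
)

func main() {
	get := &cobra.Command{
		Use: "get TYPE",
		RunE: func(cmd *cobra.Command, args []string) error {
			cmd.Println("would get:", args)
			return nil
		},
		// Called only when the shell asks for completions, so building the
		// command itself stays free of server round-trips.
		ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
			resources := []string{"pods", "roles", "clusterroles", "rolebindings"}
			return resources, cobra.ShellCompDirectiveNoFileComp
		},
	}

	root := &cobra.Command{Use: "mini"}
	root.AddCommand(get)
	_ = root.Execute()
}
```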

@soltysh
Contributor

soltysh commented Mar 30, 2018

@juanvallejo @soltysh Any idea how we can fix this?

I've seen a PR fixing that bit of code with data from discovery. Let's revisit this issue after kubernetes/kubernetes#61691 is merged.

@m4r10k
Author

m4r10k commented Mar 30, 2018

@soltysh Yes, kubernetes/kubernetes#61691 looks like it would fill the gap.

@superbrothers
Member

superbrothers commented Apr 4, 2018

It looks like the approach in kubernetes/kubernetes#61691 is not the best fit for this issue.

Since ValidArgList is used when generating a completion script with kubectl completion bash/zsh, the resource list of the cluster connected at generation time gets baked into the generated script.

# the generated completion script with `kubectl completion bash`
    must_have_one_flag=()
    must_have_one_noun=()
    must_have_one_noun+=("certificatesigningrequest")
    must_have_one_noun+=("clusterrolebinding")
    must_have_one_noun+=("componentstatus")
    must_have_one_noun+=("configmap")
    must_have_one_noun+=("controllerrevision")
    must_have_one_noun+=("cronjob")
    must_have_one_noun+=("daemonset")
    must_have_one_noun+=("deployment")
    must_have_one_noun+=("endpoints")
    must_have_one_noun+=("event")
    must_have_one_noun+=("horizontalpodautoscaler")
    must_have_one_noun+=("ingress")
    must_have_one_noun+=("job")
    must_have_one_noun+=("namespace")

This is not appropriate when working with multiple clusters. In that case, we need to obtain the resource list dynamically from the discovery service every time completion runs.

One idea is to use the kubectl api-resources command, as I commented above (#357 (comment)).

commit b8f1b5e3aeb8b200ba3ab1afa300ebd322557592
Author: Kazuki Suda <kazuki.suda@gmail.com>
Date:   Sat Mar 31 12:09:53 2018 +0900

    Complete resource types from the discovery service

diff --git a/pkg/kubectl/cmd/cmd.go b/pkg/kubectl/cmd/cmd.go
index c6fdab1..e6d542d 100644
--- a/pkg/kubectl/cmd/cmd.go
+++ b/pkg/kubectl/cmd/cmd.go
@@ -106,7 +106,14 @@ __kubectl_parse_get()
 __kubectl_get_resource()
 {
     if [[ ${#nouns[@]} -eq 0 ]]; then
-        return 1
+        local kubectl_out
+        if kubectl_out=$(kubectl api-resources $(__kubectl_override_flags) --no-headers 2>/dev/null | awk '{print $1}'); then
+            COMPREPLY=( $( compgen -W "${kubectl_out[*]}" -- "$cur" ) )
+        fi
+        if [[ ${#COMPREPLY[@]} -eq 0 ]]; then
+            # TODO: To complete shortnames
+        fi
+        return 0
     fi
     __kubectl_parse_get "${nouns[${#nouns[@]} -1]}"
 }
diff --git a/pkg/kubectl/cmd/resource/get.go b/pkg/kubectl/cmd/resource/get.go
index e4b4256..2457ff9 100644
--- a/pkg/kubectl/cmd/resource/get.go
+++ b/pkg/kubectl/cmd/resource/get.go
@@ -138,7 +138,6 @@ func NewGetOptions(out io.Writer, errOut io.Writer) *GetOptions {
 // retrieves one or more resources from a server.
 func NewCmdGet(f cmdutil.Factory, out io.Writer, errOut io.Writer) *cobra.Command {
        options := NewGetOptions(out, errOut)
-       validArgs := cmdutil.ValidArgList(f)

        cmd := &cobra.Command{
                Use: "get [(-o|--output=)json|yaml|wide|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...] (TYPE [NAME | -l label] | TYPE/NAME ...) [flags]",
@@ -152,8 +151,6 @@ func NewCmdGet(f cmdutil.Factory, out io.Writer, errOut io.Writer) *cobra.Comman
                        cmdutil.CheckErr(options.Run(f, cmd, args))
                },
                SuggestFor: []string{"list", "ps"},
-               ValidArgs:  validArgs,
-               ArgAliases: kubectl.ResourceAliases(validArgs),
        }

        cmd.Flags().StringVar(&options.Raw, "raw", options.Raw, "Raw URI to request from the server.  Uses the transport specified by the kubeconfig file.")

This works well.

$ kubectl get <tab>
apiservices                  configmaps                  endpoints                   limitranges                     persistentvolumeclaims   podtemplates             roles                      statefulsets
bindings                     controllerrevisions         events                      localsubjectaccessreviews       persistentvolumes        priorityclasses          secrets                    storageclasses
certificatesigningrequests   cronjobs                    horizontalpodautoscalers    mutatingwebhookconfigurations   poddisruptionbudgets     replicasets              selfsubjectaccessreviews   subjectaccessreviews
clusterrolebindings          customresourcedefinitions   ingresses                   namespaces                      podpresets               replicationcontrollers   selfsubjectrulesreviews    tokenreviews
clusterroles                 daemonsets                  initializerconfigurations   networkpolicies                 pods                     resourcequotas           serviceaccounts            validatingwebhookconfigurations
componentstatuses            deployments                 jobs                        nodes                           podsecuritypolicies      rolebindings             services                   volumeattachments

However, there is a problem: kubectl api-resources is too slow. In this case, it seems better to use the cached discovery info.

$ _output/bin/kubectl api-resources -v=6 >/dev/null
I0331 11:50:19.951044   31048 loader.go:359] Config loaded from file /home/ksuda/.kube/config
I0331 11:50:19.960795   31048 round_trippers.go:405] GET https://192.168.99.100:8443/api 200 OK in 9 milliseconds
I0331 11:50:19.970997   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis 200 OK in 1 milliseconds
I0331 11:50:19.988311   31048 round_trippers.go:405] GET https://192.168.99.100:8443/api/v1 200 OK in 6 milliseconds
I0331 11:50:20.008371   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/apiregistration.k8s.io/v1beta1 200 OK in 3 milliseconds
I0331 11:50:20.024099   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/extensions/v1beta1 200 OK in 5 milliseconds
I0331 11:50:20.034001   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/apps/v1 200 OK in 3 milliseconds
I0331 11:50:20.048906   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/apps/v1beta2 200 OK in 4 milliseconds
I0331 11:50:20.064560   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/apps/v1beta1 200 OK in 4 milliseconds
I0331 11:50:20.081393   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/events.k8s.io/v1beta1 200 OK in 4 milliseconds
I0331 11:50:20.096061   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/authentication.k8s.io/v1 200 OK in 3 milliseconds
I0331 11:50:20.112100   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/authentication.k8s.io/v1beta1 200 OK in 4 milliseconds
I0331 11:50:20.126972   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/authorization.k8s.io/v1 200 OK in 4 milliseconds
I0331 11:50:20.143286   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/authorization.k8s.io/v1beta1 200 OK in 4 milliseconds
I0331 11:50:20.157694   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/autoscaling/v1 200 OK in 3 milliseconds
I0331 11:50:20.170916   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/autoscaling/v2beta1 200 OK in 2 milliseconds
I0331 11:50:20.183005   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/batch/v1 200 OK in 2 milliseconds
I0331 11:50:20.196399   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/batch/v1beta1 200 OK in 2 milliseconds
I0331 11:50:20.208700   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/batch/v2alpha1 200 OK in 2 milliseconds
I0331 11:50:20.220668   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/certificates.k8s.io/v1beta1 200 OK in 2 milliseconds
I0331 11:50:20.233315   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/networking.k8s.io/v1 200 OK in 2 milliseconds
I0331 11:50:20.246039   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/policy/v1beta1 200 OK in 2 milliseconds
I0331 11:50:20.258604   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1 200 OK in 2 milliseconds
I0331 11:50:20.270608   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1 200 OK in 2 milliseconds
I0331 11:50:20.287634   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1alpha1 200 OK in 6 milliseconds
I0331 11:50:20.302241   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/settings.k8s.io/v1alpha1 200 OK in 3 milliseconds
I0331 11:50:20.317905   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/storage.k8s.io/v1 200 OK in 4 milliseconds
I0331 11:50:20.333327   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/storage.k8s.io/v1beta1 200 OK in 4 milliseconds
I0331 11:50:20.351488   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/storage.k8s.io/v1alpha1 200 OK in 3 milliseconds
I0331 11:50:20.363225   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/admissionregistration.k8s.io/v1beta1 200 OK in 1 milliseconds
I0331 11:50:20.373821   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/admissionregistration.k8s.io/v1alpha1 200 OK in 1 milliseconds
I0331 11:50:20.384692   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/apiextensions.k8s.io/v1beta1 200 OK in 1 milliseconds
I0331 11:50:20.394661   31048 round_trippers.go:405] GET https://192.168.99.100:8443/apis/scheduling.k8s.io/v1alpha1 200 OK in 1 milliseconds
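
The data completion needs is exactly what those per-group requests return. A minimal standalone sketch of fetching it with client-go (the kubeconfig loading mirrors kubectl's defaults but is not kubectl's actual wiring):

```go
// Standalone sketch: fetch the preferred resource list from discovery, the
// same data the per-group GETs above return.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig using the default loading rules (~/.kube/config etc.).
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{},
	).ClientConfig()
	if err != nil {
		panic(err)
	}

	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	// ServerPreferredResources aggregates the per-group requests shown in the
	// verbose log; caching its result would avoid repeating them on every <tab>.
	lists, err := dc.ServerPreferredResources()
	if err != nil {
		panic(err)
	}
	for _, list := range lists {
		for _, res := range list.APIResources {
			fmt.Println(res.Name)
		}
	}
}
```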

https://github.com/superbrothers/kubernetes/tree/kubectl-357-3 is an implementation of this idea. You can try this with the following steps:

$ git clone -b kubectl-357-3 https://github.com/superbrothers/kubernetes.git && cd kubernetes
$ make kubectl
$ export PATH=${PWD}/_output/bin:$PATH
$ source <(kubectl completion bash)
$ kubectl get <tab>

@liggitt
Member

liggitt commented May 4, 2018

kubernetes/kubernetes#63421 implements lookup at completion-time using api-resources (and adds caching support to make it performant)
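
A rough in-memory sketch of the caching idea (not the PR's actual code; the cached/memory package path is taken from current client-go and is an assumption for the tree of that era):

```go
// Sketch: wrap a discovery client so repeated completions reuse one round of
// discovery requests instead of re-querying every API group each time.
package completionsketch

import (
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/discovery/cached/memory"
)

func preferredResourceNames(dc discovery.DiscoveryInterface) ([]string, error) {
	cached := memory.NewMemCacheClient(dc)
	cached.Invalidate() // make sure the cache is (re)filled before the first read

	lists, err := cached.ServerPreferredResources()
	if err != nil {
		return nil, err
	}
	var names []string
	for _, list := range lists {
		for _, res := range list.APIResources {
			names = append(names, res.Name)
		}
	}
	return names, nil
}
```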

@superbrothers
Member

@liggitt Great work!

@m4r10k
Author

m4r10k commented May 4, 2018

@liggitt Thanks!

k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue May 4, 2018
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Cache preferred resources, use in kubectl resource name autocomplete

Fixes #63145
Fixes kubernetes/kubectl#357
Alternative to #61928 

* starts to unify preferred resource logic on top of ServerGroups()/ServerResourcesForGroupVersion() methods
* allows indicating a cached list of resources is acceptable when calling `kubectl api-resources` (default is still to rediscover)
* uses `kubectl api-resources` in bash completion

```sh
$ kubectl get [TAB][TAB]
apiservices.apiregistration.k8s.io                            networkpolicies.extensions
certificatesigningrequests.certificates.k8s.io                networkpolicies.networking.k8s.io
clusterrolebindings.rbac.authorization.k8s.io                 nodes
clusterroles.rbac.authorization.k8s.io                        persistentvolumeclaims
componentstatuses                                             persistentvolumes
configmaps                                                    poddisruptionbudgets.policy
controllerrevisions.apps                                      pods
cronjobs.batch                                                podsecuritypolicies.extensions
customresourcedefinitions.apiextensions.k8s.io                podsecuritypolicies.policy
daemonsets.apps                                               podtemplates
daemonsets.extensions                                         replicasets.apps
deployments.apps                                              replicasets.extensions
deployments.extensions                                        replicationcontrollers
endpoints                                                     resourcequotas
events                                                        rolebindings.rbac.authorization.k8s.io
events.events.k8s.io                                          roles.rbac.authorization.k8s.io
horizontalpodautoscalers.autoscaling                          secrets
ingresses.extensions                                          serviceaccounts
initializerconfigurations.admissionregistration.k8s.io        services
jobs.batch                                                    statefulsets.apps
limitranges                                                   storageclasses.storage.k8s.io
mutatingwebhookconfigurations.admissionregistration.k8s.io    validatingwebhookconfigurations.admissionregistration.k8s.io
namespaces                                                    volumeattachments.storage.k8s.io
```


```release-note
NONE
```
k8s-publishing-bot added a commit to kubernetes/client-go that referenced this issue May 4, 2018
Kubernetes-commit: 068b7befa926d376634c79cbba3d210c1dc596fe