
`kubectl get all` lists some resource types twice #55720

Closed
atombender opened this Issue Nov 14, 2017 · 13 comments

@atombender
Contributor

atombender commented Nov 14, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

kubectl get all prints the replica set section twice.

What you expected to happen:

It should not print dupes.

How to reproduce it (as minimally and precisely as possible):

$ kubectl get all -l app=myapp
NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/myapp        4         4         4            4           169d

NAME                                  DESIRED   CURRENT   READY     AGE
rs/myapp-1031441472        0         0         0         159d
rs/myapp-1066912010        0         0         0         159d
rs/myapp-1627954893        0         0         0         140d
rs/myapp-1660980575        0         0         0         154d
rs/myapp-2403377231        0         0         0         137d
rs/myapp-2479791910        4         4         4         53d
rs/myapp-3454720083        0         0         0         145d
rs/myapp-3461390017        0         0         0         139d
rs/myapp-3554658816        0         0         0         162d
rs/myapp-3624978688        0         0         0         169d
rs/myapp-378637676         0         0         0         154d
rs/myapp-4151746477        0         0         0         139d
rs/myapp-702975759         0         0         0         159d

NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/myapp        4         4         4            4           169d

NAME                                  DESIRED   CURRENT   READY     AGE
rs/myapp-1031441472        0         0         0         159d
rs/myapp-1066912010        0         0         0         159d
rs/myapp-1627954893        0         0         0         140d
rs/myapp-1660980575        0         0         0         154d
rs/myapp-2403377231        0         0         0         137d
rs/myapp-2479791910        4         4         4         53d
rs/myapp-3454720083        0         0         0         145d
rs/myapp-3461390017        0         0         0         139d
rs/myapp-3554658816        0         0         0         162d
rs/myapp-3624978688        0         0         0         169d
rs/myapp-378637676         0         0         0         154d
rs/myapp-4151746477        0         0         0         139d
rs/myapp-702975759         0         0         0         159d

NAME                                        READY     STATUS    RESTARTS   AGE
po/myapp-2479791910-9w7zx        1/1       Running   0          14d
po/myapp-2479791910-k2p44        1/1       Running   0          14d
po/myapp-2479791910-nb5lj        1/1       Running   0          14d
po/myapp-2479791910-wjmcn        1/1       Running   1          14d

NAME                        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
svc/myapp        NodePort   10.7.245.217   <none>        80:32046/TCP   169d

Correct:

$ kubectl get rs,pod -l app=myapp
NAME                                  DESIRED   CURRENT   READY     AGE
rs/myapp-1031441472        0         0         0         159d
rs/myapp-1066912010        0         0         0         159d
rs/myapp-1627954893        0         0         0         140d
rs/myapp-1660980575        0         0         0         154d
rs/myapp-2403377231        0         0         0         137d
rs/myapp-2479791910        4         4         4         53d
rs/myapp-3454720083        0         0         0         145d
rs/myapp-3461390017        0         0         0         139d
rs/myapp-3554658816        0         0         0         162d
rs/myapp-3624978688        0         0         0         169d
rs/myapp-378637676         0         0         0         154d
rs/myapp-4151746477        0         0         0         139d
rs/myapp-702975759         0         0         0         159d

NAME                                        READY     STATUS    RESTARTS   AGE
po/myapp-2479791910-9w7zx        1/1       Running   0          14d
po/myapp-2479791910-k2p44        1/1       Running   0          14d
po/myapp-2479791910-nb5lj        1/1       Running   0          14d
po/myapp-2479791910-wjmcn        1/1       Running   1          14d

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): 1.8.1-gke.1
  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@nikhiljindal

Member

nikhiljindal commented Nov 14, 2017

I see deployments were listed twice as well.

@kubernetes/sig-cli-bugs

@atombender atombender changed the title from `kubectl get all` lists replica sets twice to `kubectl get all` lists some resource types twice Nov 14, 2017

@lichuqiang

Member

lichuqiang commented Nov 15, 2017

When expanding the category "all" (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/resource/categories.go#L53),
kubectl loads the resources from the discovery client instead of those in legacyUserResources:

var legacyUserResources = []schema.GroupResource{
	{Group: "", Resource: "pods"},
	{Group: "", Resource: "replicationcontrollers"},
	{Group: "", Resource: "services"},
	{Group: "apps", Resource: "statefulsets"},
	{Group: "autoscaling", Resource: "horizontalpodautoscalers"},
	{Group: "batch", Resource: "jobs"},
	{Group: "batch", Resource: "cronjobs"},
	{Group: "extensions", Resource: "daemonsets"},
	{Group: "extensions", Resource: "deployments"},
	{Group: "extensions", Resource: "replicasets"},
}

As a result, resources that exist in multiple groups get queried repeatedly:

I1115 13:34:20.016650  111340 categories.go:68] ------------category is : [all], Group: extensions, resource: daemonsets, version: v1beta1
I1115 13:34:20.016680  111340 categories.go:68] ------------category is : [all], Group: extensions, resource: deployments, version: v1beta1
I1115 13:34:20.016693  111340 categories.go:68] ------------category is : [all], Group: extensions, resource: replicasets, version: v1beta1
I1115 13:34:20.016705  111340 categories.go:68] ------------category is : [all], Group: apps, resource: daemonsets, version: v1
I1115 13:34:20.016715  111340 categories.go:68] ------------category is : [all], Group: apps, resource: deployments, version: v1beta1
I1115 13:34:20.016726  111340 categories.go:68] ------------category is : [all], Group: apps, resource: statefulsets, version: v1beta1
I1115 13:34:20.016737  111340 categories.go:68] ------------category is : [all], Group: apps, resource: daemonsets, version: v1beta2
I1115 13:34:20.016747  111340 categories.go:68] ------------category is : [all], Group: apps, resource: deployments, version: v1beta2
I1115 13:34:20.016757  111340 categories.go:68] ------------category is : [all], Group: apps, resource: replicasets, version: v1beta2
I1115 13:34:20.016768  111340 categories.go:68] ------------category is : [all], Group: apps, resource: statefulsets, version: v1beta2
I1115 13:34:20.016779  111340 categories.go:68] ------------category is : [all], Group: autoscaling, resource: horizontalpodautoscalers, version: v1
I1115 13:34:20.016789  111340 categories.go:68] ------------category is : [all], Group: autoscaling, resource: horizontalpodautoscalers, version: v2beta1
I1115 13:34:20.016800  111340 categories.go:68] ------------category is : [all], Group: batch, resource: jobs, version: v1
I1115 13:34:20.016810  111340 categories.go:68] ------------category is : [all], Group: batch, resource: cronjobs, version: v1beta1
I1115 13:34:20.016822  111340 categories.go:68] ------------category is : [all], Group: , resource: pods, version: v1
I1115 13:34:20.016833  111340 categories.go:68] ------------category is : [all], Group: , resource: replicationcontrollers, version: v1
I1115 13:34:20.016843  111340 categories.go:68] ------------category is : [all], Group: , resource: services, version: v1

Take the rs resource for example: one copy is in the extensions group, one in apps.

I meant to work on a fix, but since CategoryExpander is considered a public interface and I don't have enough background knowledge, I couldn't decide on the proper way to change it.

@zjj2wry Do you have any suggestions?
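The duplication described above can be sketched in a few lines of Go. This is an illustration only, not the actual kubectl code: expanding the "all" category via discovery yields one entry per group a resource is served in, and nothing collapses same-named resources across groups. A naive client-side fix would key on the bare resource name.

```go
package main

import "fmt"

// groupResource mirrors the shape of schema.GroupResource shown above.
type groupResource struct {
	Group    string
	Resource string
}

// dedupeByResource keeps only the first entry seen for each resource name.
// A hypothetical client-side workaround sketch, NOT what kubectl does.
func dedupeByResource(in []groupResource) []groupResource {
	seen := map[string]bool{}
	out := []groupResource{}
	for _, gr := range in {
		if !seen[gr.Resource] {
			seen[gr.Resource] = true
			out = append(out, gr)
		}
	}
	return out
}

func main() {
	// Abridged result of discovery-based expansion of "all":
	// replicasets and deployments are served by both extensions and apps.
	expanded := []groupResource{
		{"extensions", "replicasets"},
		{"apps", "replicasets"},
		{"extensions", "deployments"},
		{"apps", "deployments"},
		{"", "pods"},
	}
	fmt.Println(len(expanded), "->", len(dedupeByResource(expanded))) // prints: 5 -> 3
}
```

Note that keying on the resource name alone is exactly the kind of baked-in "magic knowledge" a generic discovery-driven client arguably should not have, since two same-named resources in different groups are not guaranteed to be related.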

@liggitt

Member

liggitt commented Nov 16, 2017

the current behavior is correct, given the discovery information published. replicasets in the apps group are not inherently related to replicasets in the extensions group as far as the generic aliasing behavior is concerned.

if we don't want both to appear, I'd pursue changing how the server exposes aliases for those types so that only one group or the other lists the all alias for itself

@lichuqiang

Member

lichuqiang commented Nov 16, 2017

In fact, as I'm not familiar with api-machinery, I wonder why certain resources exist in both groups.
How do we expect users to use them when installing resources?
Would any of them get removed in the near future?

@liggitt

Member

liggitt commented Nov 16, 2017

In fact, as I'm not familiar with api-machinery, I wonder why certain resources exist in both groups.

Several began life in the extensions group and are being migrated to their final homes in the apps group. There must be a period of overlap where they can be created/accessed via either group to allow for seamless migration, then they will be removed from the extensions group.

@lichuqiang

Member

lichuqiang commented Nov 16, 2017

Oh, I see. Thanks for the explanation. So to some extent the problem is not really a CLI issue.

@tengqm

Contributor

tengqm commented Nov 16, 2017

Maybe the API docs ( https://kubernetes.io/docs/concepts/overview/kubernetes-api/ ) could be of some help. There are some links on that page if you want to dig more. :)

@dixudx

Member

dixudx commented Nov 20, 2017

/close

Since this is not a bug, @atombender, you can reopen it if needed.

@atombender

Contributor

atombender commented Nov 20, 2017

@dixudx: It might not be a bug on the backend, but it's certainly inconsistent behaviour: kubectl get all should be equivalent to kubectl get pod,rs,service,deploy,[... rest of types.], but it isn't. kubectl get all isn't a strict superset.

$ kubectl get all | grep "rs/" | wc -l
1158
$ kubectl get rs | wc -l
580

That's unexpected. It may be strictly correct in terms of the API protocol, but kubectl is a UI that shouldn't leak internal concerns that only confuse. (It also confuses tools that use kubectl programmatically to interact with a cluster, which now have to de-dupe the output of kubectl get all for no particularly good reason.)
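For tooling that has to consume the duplicated output today, a minimal de-dupe pass is straightforward. A hypothetical sketch (the function name and sample data are illustrative, not part of kubectl):

```go
package main

import (
	"fmt"
	"strings"
)

// dedupeLines drops repeated lines while preserving first-seen order.
// A workaround for programs consuming `kubectl get all` output.
func dedupeLines(out string) []string {
	seen := map[string]bool{}
	unique := []string{}
	for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
		if line != "" && !seen[line] {
			seen[line] = true
			unique = append(unique, line)
		}
	}
	return unique
}

func main() {
	// Abridged duplicated output, as in the report above.
	out := `rs/myapp-2479791910   4   4   4   53d
rs/myapp-702975759    0   0   0   159d
rs/myapp-2479791910   4   4   4   53d
rs/myapp-702975759    0   0   0   159d`
	fmt.Println(len(dedupeLines(out))) // prints: 2
}
```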

@liggitt

Member

liggitt commented Nov 20, 2017

kubectl get all should be equivalent to kubectl get pod,rs,service,deploy,[... rest of types.], but it isn't

yes, it is

it is equivalent to kubectl get [...all types listed in discovery as belonging in the 'all' alias]

that expands to kubectl get pods.v1.,replicasets.v1.apps,replicasets.v1beta1.extensions,...

but kubectl is a UI that shouldn't leak internal concerns

I agree. It is because kubectl is just a UI presenting the information surfaced by the API that we have this behavior. Kubectl is basing its behavior on discovery documents published by the API. Handling those generically means it should not bake in magic knowledge that deployments.v1.apps and deployments.v1beta1.extensions are the same resource.

@atombender

Contributor

atombender commented Nov 20, 2017

@liggitt I didn't know kubectl supported that. That's not documented in the --help, and at least some of the types are not working for me:

$ kubectl get deployments.v1beta1.extensions | wc -l 
63
$ kubectl get pods.v1 | wc -l
the server doesn't have a resource type "pods" in group "v1"
0

The help text implies that all is an alias for the resource types listed, which does not show API namespace names (e.g. it's just pods, not pods.v1).

@liggitt

Member

liggitt commented Nov 20, 2017

sorry, pods.v1. (note trailing dot... the form is <resource>.<version>.<group>, and pods are in the legacy "" group)
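The fully-qualified form can be illustrated with a small parser. This is an illustrative sketch only (kubectl's own parsing lives elsewhere): the string splits into at most three dot-separated parts, and a trailing dot leaves the group empty, which is the legacy "" group.

```go
package main

import (
	"fmt"
	"strings"
)

// parseFullySpecified splits a kubectl-style fully-qualified resource
// reference of the form <resource>.<version>.<group>.
// For "pods.v1." the trailing dot yields the legacy "" group.
func parseFullySpecified(s string) (resource, version, group string) {
	parts := strings.SplitN(s, ".", 3)
	resource = parts[0]
	if len(parts) > 1 {
		version = parts[1]
	}
	if len(parts) > 2 {
		group = parts[2]
	}
	return
}

func main() {
	r, v, g := parseFullySpecified("pods.v1.")
	fmt.Printf("%q %q %q\n", r, v, g) // prints: "pods" "v1" ""

	r, v, g = parseFullySpecified("replicasets.v1beta1.extensions")
	fmt.Printf("%q %q %q\n", r, v, g) // prints: "replicasets" "v1beta1" "extensions"
}
```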

@anorth2

anorth2 commented Dec 7, 2017

So you're saying it's valid for kubectl get all to return my replicasets, pods, deployments twice, and no services......

k8s-merge-robot added a commit that referenced this issue Jan 15, 2018

Merge pull request #58301 from liggitt/all-category-single-group
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Limit all category to apps group for ds/deployment/replicaset

There's lots of confusion around the resources we are moving out of the extensions api group appearing twice when using the `kubectl get all` category. Fortunately, we control that category serverside. For resources that appear in multiple API groups (deployments, daemonsets, replicasets), this updates the server to only include the `apps` resources in the `all` category, so they only appear once.

Fixes kubernetes/kubectl#189
Fixes kubernetes/kubectl#167
Fixes #55720
Fixes #57270
Fixes #57931

```release-note
NONE
```
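The PR's approach can be sketched as follows. This is an illustration of the idea only (the names here are hypothetical; the real change lives in the server's discovery/category definitions): for resources served in both groups during migration, only the apps copy advertises the all category, so discovery-driven clients list each kind once.

```go
package main

import "fmt"

// migrated lists the resources that exist in both extensions and apps
// during the migration window.
var migrated = map[string]bool{
	"daemonsets":  true,
	"deployments": true,
	"replicasets": true,
}

// categoriesFor returns the categories a resource advertises in discovery.
// Only the apps copy of a migrated resource keeps "all".
func categoriesFor(group, resource string) []string {
	if migrated[resource] && group != "apps" {
		return nil // the extensions copy no longer advertises "all"
	}
	return []string{"all"}
}

func main() {
	fmt.Println(categoriesFor("extensions", "replicasets")) // prints: []
	fmt.Println(categoriesFor("apps", "replicasets"))       // prints: [all]
	fmt.Println(categoriesFor("", "pods"))                  // prints: [all]
}
```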