
"kubectl explain" should be able to explain "apiservices" and "customresourcedefinition" #49465

Closed
xiangpengzhao opened this issue Jul 24, 2017 · 15 comments · Fixed by #53228
Labels
area/custom-resources kind/bug Categorizes issue or PR as related to a bug. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@xiangpengzhao
Contributor

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:

# kubectl explain apiservices
error: group apiregistration.k8s.io has not been registered
# kubectl explain customresourcedefinition
error: group apiextensions.k8s.io has not been registered

What you expected to happen:
kubectl explain should be able to explain the above two resources.

It should print something like the following:

DESCRIPTION:
APIService represents a server for a particular GroupVersion. Name must be "version.group".

FIELDS:
......

How to reproduce it (as minimally and precisely as possible):
Run the commands above.

Anything else we need to know?:
/sig cli
/sig api-machinery

Environment:

  • Kubernetes version (use kubectl version): a9bf441
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. sig/cli Categorizes an issue or PR as relevant to SIG CLI. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Jul 24, 2017
@dixudx
Member

dixudx commented Jul 24, 2017

/cc

@xiangpengzhao
Contributor Author

@kubernetes/sig-cli-bugs

@k8s-ci-robot
Contributor

@xiangpengzhao: Reiterating the mentions to trigger a notification:
@kubernetes/sig-cli-bugs.

In response to this:

@kubernetes/sig-cli-bugs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@caesarxuchao
Member

cc @mbohlool

@mbohlool
Contributor

This looks like a problem with the old swagger 1.2 spec. We do not aggregate the swagger 1.2 spec. When kubectl moves to OpenAPI, this should be fixed by design.

k8s-github-robot pushed a commit that referenced this issue Oct 4, 2017
Automatic merge from submit-queue (batch tested with PRs 53228, 53232, 53353). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Openapi explain

**What this PR does / why we need it**:
This rewrites `kubectl explain` to use OpenAPI rather than swagger 1.2, and removes the former code.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49465, fixes partially #44589, fixes partially #38637

**Special notes for your reviewer**:
FYI @mbohlool 

**Release note**:
```release-note
`kubectl explain` now uses openapi rather than swagger 1.2.
```
@nikhita
Member

nikhita commented Jan 16, 2018

This is still not fixed (even after kubectl moved to OpenAPI).

There are two problems here:

  1. kubectl explain uses groups registered into a group registry. This can be fixed by:
--- a/pkg/kubectl/cmd/explain.go
+++ b/pkg/kubectl/cmd/explain.go
@@ -26,7 +26,6 @@ import (
 	"k8s.io/kubernetes/pkg/kubectl/cmd/templates"
 	cmdutil "k8s.io/kubernetes/pkg/kubectl/cmd/util"
 	"k8s.io/kubernetes/pkg/kubectl/explain"
-	"k8s.io/kubernetes/pkg/kubectl/scheme"
 	"k8s.io/kubernetes/pkg/kubectl/util/i18n"
 )
 
@@ -91,7 +90,6 @@ func RunExplain(f cmdutil.Factory, out, cmdErr io.Writer, cmd *cobra.Command, ar
 		return err
 	}
 
-	// TODO: We should deduce the group for a resource by discovering the supported resources at server.
 	fullySpecifiedGVR, groupResource := schema.ParseResourceArg(inModel)
 	gvk := schema.GroupVersionKind{}
 	if fullySpecifiedGVR != nil {
@@ -104,20 +102,12 @@ func RunExplain(f cmdutil.Factory, out, cmdErr io.Writer, cmd *cobra.Command, ar
 		}
 	}
 
-	if len(apiVersionString) == 0 {
-		groupMeta, err := scheme.Registry.Group(gvk.Group)
-		if err != nil {
-			return err
-		}
-		apiVersion = groupMeta.GroupVersion
-
-	} else {
-		apiVersion, err = schema.ParseGroupVersion(apiVersionString)
-		if err != nil {
+	if len(apiVersionString) != 0 {
+		if apiVersion, err = schema.ParseGroupVersion(apiVersionString); err != nil {
 			return err
 		}
+		gvk = apiVersion.WithKind(gvk.Kind)
 	}
-	gvk = apiVersion.WithKind(gvk.Kind)
 
 	resources, err := f.OpenAPISchema()
 	if err != nil {

However, after this it gives:

error: Couldn't find resource for "apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition"
  2. This occurs because the OpenAPI schema for apiextensions and apiregistration is incorrect.

kubectl explain looks up the gvk of the resource in question from a list of resources:

func (d *document) LookupResource(gvk schema.GroupVersionKind) proto.Schema {
	modelName, found := d.resources[gvk]
	if !found {
		return nil
	}
	return d.models.LookupModel(modelName)
}
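To make the failure mode concrete, here is a self-contained toy version of that lookup. The gvk and document types below are simplified stand-ins for schema.GroupVersionKind and the real openapi document type, not the actual kubectl code: the point is only that a GVK missing from the resources map short-circuits to an empty result.

```go
package main

import "fmt"

// gvk is a simplified stand-in for schema.GroupVersionKind.
type gvk struct{ Group, Version, Kind string }

// document mirrors the shape of the lookup above: a gvk -> model-name map
// plus a model store. Both field types are illustrative assumptions.
type document struct {
	resources map[gvk]string
	models    map[string]string
}

// LookupResource returns "" when the gvk was never registered in the
// resources map, which is the situation for the apiextensions and
// apiregistration types described in this comment.
func (d *document) LookupResource(key gvk) string {
	modelName, found := d.resources[key]
	if !found {
		return ""
	}
	return d.models[modelName]
}

func main() {
	d := &document{
		resources: map[gvk]string{
			{"", "v1", "Pod"}: "io.k8s.api.core.v1.Pod",
		},
		models: map[string]string{
			"io.k8s.api.core.v1.Pod": "Pod is a collection of containers...",
		},
	}
	// A registered GVK resolves; an unregistered one returns nothing.
	fmt.Println(d.LookupResource(gvk{"", "v1", "Pod"}) != "")
	fmt.Println(d.LookupResource(gvk{"apiextensions.k8s.io", "v1beta1", "CustomResourceDefinition"}) != "")
}
```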

It gets the gvk here:

gvk := parseGroupVersionKind(model)

parseGroupVersionKind finds the gvk through groupVersionKindExtensionKey:

// Get the extensions
gvkExtension, ok := extensions[groupVersionKindExtensionKey]
if !ok {
	return schema.GroupVersionKind{}
}

Here, groupVersionKindExtensionKey is "x-kubernetes-group-version-kind" and is the key used to lookup the GroupVersionKind value for an object definition from the definition's "extensions" map.

However, we do not yet generate x-kubernetes-* extensions for apiextensions-apiserver and the aggregator (#52741), which is why the lookup fails.
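As a sketch of how that extension drives the lookup, the snippet below parses a hypothetical extensions map using the published x-kubernetes-group-version-kind layout (a list of group/version/kind entries). The gvk type and parseGVK helper are illustrative stand-ins, not the real kubectl code; the key point is that a definition without the extension yields an empty GVK, so LookupResource can never find it.

```go
package main

import "fmt"

// groupVersionKindExtensionKey is the extension key discussed above.
const groupVersionKindExtensionKey = "x-kubernetes-group-version-kind"

// gvk is a simplified stand-in for schema.GroupVersionKind.
type gvk struct{ Group, Version, Kind string }

// parseGVK sketches how the extension value can be turned into a GVK.
// The nested-map layout below is an assumption based on the published
// extension format (a list of {"group", "version", "kind"} maps).
func parseGVK(extensions map[string]interface{}) gvk {
	ext, ok := extensions[groupVersionKindExtensionKey]
	if !ok {
		return gvk{} // extension missing: empty GVK, later lookup fails
	}
	list, ok := ext.([]interface{})
	if !ok || len(list) == 0 {
		return gvk{}
	}
	m, ok := list[0].(map[string]interface{})
	if !ok {
		return gvk{}
	}
	g, _ := m["group"].(string)
	v, _ := m["version"].(string)
	k, _ := m["kind"].(string)
	return gvk{Group: g, Version: v, Kind: k}
}

func main() {
	exts := map[string]interface{}{
		groupVersionKindExtensionKey: []interface{}{
			map[string]interface{}{
				"group":   "apiextensions.k8s.io",
				"version": "v1beta1",
				"kind":    "CustomResourceDefinition",
			},
		},
	}
	fmt.Println(parseGVK(exts))
	// A definition lacking the extension parses to the zero value.
	fmt.Println(parseGVK(map[string]interface{}{}) == (gvk{}))
}
```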

/reopen

@k8s-ci-robot
Contributor

@nikhita: you can't re-open an issue/PR unless you authored it or you are assigned to it.


@xiangpengzhao
Contributor Author

/reopen

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 17, 2018
@nikhita
Member

nikhita commented Apr 17, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 17, 2018
@nikhita
Member

nikhita commented Jun 1, 2018

Looks like the blocker was fixed in #64174 🎉

I will move forward with the fix.

@nikhita
Member

nikhita commented Jun 1, 2018

/assign

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 30, 2018
@nikhita
Member

nikhita commented Sep 3, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2018
@liggitt
Member

liggitt commented Nov 24, 2018

fixed in 1.11 in #64174
