Plural name for CRD doesn't work with kubectl #12

Closed
mirshahriar opened this issue Aug 31, 2017 · 16 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@mirshahriar

My CRD for Elasticsearch

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    app: kubedb
  name: elasticsearches.kubedb.com
spec:
  group: kubedb.com
  names:
    kind: Elasticsearch
    listKind: ElasticsearchList
    plural: elasticsearches
    shortNames:
    - es
    singular: elasticsearch
  scope: Namespaced
  version: v1alpha1
status:
  acceptedNames:
    kind: Elasticsearch
    listKind: ElasticsearchList
    plural: elasticsearches
    shortNames:
    - es
    singular: elasticsearch
  conditions:
  - lastTransitionTime: null
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: 2017-08-31T07:10:38Z
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established

None of the following commands works:

$ kubectl get elasticsearch
Error from server (NotFound): Unable to list "elasticsearchs": the server could not find the requested resource (get elasticsearchs.kubedb.com)

$ kubectl get elasticsearches
the server doesn't have a resource type "elasticsearches"

$ kubectl get es
the server doesn't have a resource type "elasticsearches"

But I got the correct SelfLink:

/apis/kubedb.com/v1alpha1/elasticsearches

Here, I have used elasticsearches as the plural form of elasticsearch.

How can I fix this and use elasticsearches as plural for kind Elasticsearch?
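As a sanity check, the SelfLink above implies the server does serve the resource at the path built from the CRD's group, version, and plural. A minimal sketch (values taken from the manifest above; the `kubectl get --raw` line is commented out because it needs a live cluster, and is an assumed workaround, not something from this thread):

```shell
# Build the API path kubectl should resolve, from the CRD's names:
group=kubedb.com
version=v1alpha1
plural=elasticsearches
path="/apis/${group}/${version}/${plural}"
echo "$path"    # /apis/kubedb.com/v1alpha1/elasticsearches

# With cluster access, this path can be queried directly, bypassing
# kubectl's client-side name resolution:
#   kubectl get --raw "$path"
```

If the raw path returns a list while `kubectl get elasticsearches` fails, the problem is on the client side, not in the CRD itself.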

@nikhita
Member

nikhita commented Aug 31, 2017

Similar to kubernetes/kubernetes#51639.
Looks like kubectl uses the old pluralization code...
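A toy sketch (NOT kubectl's actual code) of why suffix-based pluralization misfires here: naively appending "s" produces "elasticsearchs", exactly the name in the error message above, instead of the plural declared in the CRD.

```shell
# Hypothetical naive pluralizer: just append "s".
naive_plural() { printf '%s\n' "${1}s"; }

declared_plural=elasticsearches        # from spec.names.plural in the CRD
guessed=$(naive_plural elasticsearch)

echo "guessed:  $guessed"              # elasticsearchs -- matches the error
echo "declared: $declared_plural"      # elasticsearches
```

Since the server only registers the declared plural, a lookup using the guessed name fails with NotFound.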

@nikhita
Copy link
Member

nikhita commented Sep 19, 2017

This is fixed in kubernetes/kubernetes#50012.

@frankgreco

@nikhita do you know what release of kubectl this made it into?

@nikhita
Copy link
Member

nikhita commented Oct 4, 2017

> @nikhita do you know what release of kubectl this made it into?

So the pluralization was not a bug in kubectl after all; it was an apimachinery issue. This is fixed in 1.8.

@frankgreco

Awesome! The release-1.7 branch of this project currently uses commit 917740426ad66ff818da4809990480bcc0786a77 of apimachinery. So, for CRDs to work properly e2e with Kubernetes versions earlier than 1.8, wouldn't this fix need to be backported to the release-1.7 branch of this project?

@frankgreco

Or, is it safe to use the release-1.8 branch of this project with Kubernetes v1.7?

@nikhita
Member

nikhita commented Oct 4, 2017

> Or, is it safe to use the release-1.8 branch of this project with Kubernetes v1.7?

Not so sure about this.

> wouldn't this fix need to be backported to the release-1.7 branch of this project?

I just checked - it has been backported to 1.7 in kubernetes/kubernetes#52545. :) The changes were made in apimachinery and client-go.

Currently, the release-1.7 branches of kubernetes/apimachinery and kubernetes/client-go repos are not synced with the main kubernetes/kubernetes repo. Once the syncing is done, this fix should be available in both 1.7 and 1.8.

If you want to use this now, sttts/apimachinery and sttts/client-go are up to date. The syncing should be done soon.

@frankgreco

Awesome, so if I understand correctly, the steps that would need to be completed before this works e2e in my cluster are:

  1. Wait for the kubernetes/kubernetes repo to be synced with both kubernetes/apimachinery and kubernetes/client-go@release-4.0.
  2. kubernetes/apiextensions-server@release-1.7 will need to update its dependencies to use the fixed version of kubernetes/apimachinery from the previous step.
  3. My project will update its dependencies to reflect the above changes.
  4. A new patch version of kubernetes/kubernetes will need to be cut so that kubernetes/kubernetes#52545 is included.
  5. Update my cluster to use 1.7.new, and then everything should work.

@nikhita
Member

nikhita commented Oct 4, 2017

@frankgreco Sounds perfect! :)

@frankgreco

Any update on this? If I understand correctly, there's no real workaround, since the fix requires a patch release of kubernetes/kubernetes, so any CRDs affected by this issue cannot be applied.

@nikhita
Member

nikhita commented Oct 8, 2017

@frankgreco The fix is present in 1.7.8: https://github.com/kubernetes/kubernetes/blob/v1.7.8/staging/src/k8s.io/apimachinery/pkg/api/meta/restmapper.go. :)

However, k8s.io/apiextensions-apiserver is not up to date. The bot that syncs k8s.io/apiextensions-apiserver with kubernetes/kubernetes is currently being fixed. It should be done soon, AFAIK.

@nikhita
Member

nikhita commented Oct 16, 2017

@frankgreco k8s.io/apiextensions-apiserver has been synced now! 🎉

@frankgreco

@nikhita I see that the master branch was updated, but the release-1.7 and release-1.8 branches are still old. I'd assume that these branches will be updated shortly?

@nikhita
Member

nikhita commented Oct 17, 2017

@frankgreco Thanks for notifying about this!

> I'd assume that these branches will be updated shortly?

Yes, they should be updated soon. 1.8 is updated (there have been no changes on the branch after the last commit), 1.7 needs to be updated.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 15, 2018
@liggitt
Member

liggitt commented Jan 15, 2018

/close
works against 1.9
