
Improve error message for missing resources (no matches for kind ... in version ...) #1118

Closed
ringerc opened this issue Sep 20, 2021 · 3 comments · Fixed by kubernetes/kubernetes#107363
Labels: kind/feature · priority/backlog · triage/accepted

Comments


ringerc commented Sep 20, 2021

What would you like to be added:

I'd like to see a better error from kubectl when an API request fails due to a missing resource. Instead of

unable to recognize "STDIN": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1"

I suggest elaborating on the server-side message to identify the line number in the input where the failing YAML document starts, as well as the metadata defining the resource, so the user can tell exactly which resource failed to deploy. Something like:

kubectl: unable to recognize "STDIN": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1"
        while deploying STDIN:200 (document 8) with metadata { "name": "foo", "namespace": "bar" }
        hint: missing resource { "ApiServices": "v1.monitoring.coreos.com" }
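
Until something like that exists, the failing document and its line number can be located by hand with grep -n over the generated stream. A rough workaround sketch (kustomize build . stands in for however the stream is actually produced):

  # Find the line numbers where Alertmanager documents start
  $ kustomize build . | grep -n 'kind: Alertmanager'

  # Get the document index: count the separators preceding that line
  $ kustomize build . | head -n 200 | grep -c '^---'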

(Ideally kustomize would inject comments into the YAML stream that identify the input file(s) used to generate it, like C preprocessor line markers. But that's outside the scope of this request.)

Why is this needed:

kubectl is routinely used to read and deploy streams of many YAML documents from kustomize or similar tools. In these cases it's difficult and confusing to work out exactly which part of the input caused a given error.

You'll get errors like:

unable to recognize "STDIN": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1"
unable to recognize "STDIN": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1"
...

when the required apiservices resource does not exist yet, e.g.

$ kubectl get apiservices/v1.monitoring.coreos.com
Error from server (NotFound): apiservices.apiregistration.k8s.io "v1.monitoring.coreos.com" not found
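
The discovery commands show directly what the server knows about the group (using the monitoring.coreos.com group from the example above):

  # Is the group/version registered at all?
  $ kubectl api-versions | grep monitoring.coreos.com

  # Which kinds does the server recognize in that group?
  $ kubectl api-resources --api-group=monitoring.coreos.com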

Importantly, there's very little to tell the user that:

  • The error indicates that an ApiServices object named v1.monitoring.coreos.com was required by the request, but did not exist on the server
  • The error is a response from the k8s server to a kubectl request
  • The error arose in response to a request to create a resource of kind: Alertmanager with metadata.name: main and metadata.namespace: monitoring in input file manifests/alertmanager-alertmanager.yaml

This is particularly troublesome when processing a stream of many YAML documents as produced by tools like kustomize or kubectl's own `kubectl apply -k` support.
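
One way to isolate the failing resource today is to split the stream into one file per document and apply each individually. A sketch, assuming GNU csplit and kustomize-generated input:

  $ kustomize build . > all.yaml
  # Split on document separators; -z suppresses empty pieces
  $ csplit -z -f doc- all.yaml '/^---$/' '{*}'
  # Apply one document at a time so each error maps to a single file
  $ for f in doc-*; do kubectl apply -f "$f"; done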

I noticed this as part of #1117, and IMO it's a significant usability issue for newcomers.

A web search (https://www.google.com/search?q=kubectl+matches+for+kind+in+version) turns up a lot of people confused by this error.

ringerc added the kind/feature label on Sep 20, 2021
k8s-ci-robot added the needs-triage label on Sep 20, 2021
ringerc changed the title from "Improve error message for missing resources" to "Improve error message for missing resources (no matches for kind ... in version ...)" on Sep 20, 2021
k8s-triage-robot commented Dec 19, 2021

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Dec 19, 2021
eddiezane (Member) commented Jan 5, 2022

/triage accepted
/remove-lifecycle stale
/priority backlog

k8s-ci-robot added the triage/accepted and priority/backlog labels and removed the lifecycle/stale and needs-triage labels on Jan 5, 2022