
Reading nonexistent component status via API causes internal server error #101348

Closed
matusf opened this issue Apr 22, 2021 · 9 comments
Labels
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.
  • sig/api-machinery: Categorizes an issue or PR as relevant to SIG API Machinery.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

matusf commented Apr 22, 2021

What happened:

Hello! @viralpoetry and I were fuzzing Kubernetes (via openapi-fuzzer) and found the following bug: reading a nonexistent component status via the API causes an internal server error. The endpoint is /api/v1/componentstatuses/{name}.

What you expected to happen:

A response with a non-500 HTTP status code (404 Not Found would be the natural choice for a missing resource).
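For comparison, a hedged sketch of what a consistent "not found" response could look like. The body below is hypothetical (the reason and message wording are assumed by analogy with how other Kubernetes resources report missing objects, not taken from a real cluster), but it illustrates the expected pairing of a "not found" message with a 4xx code:

```python
# Hypothetical example of a consistent "not found" Status object,
# modeled on how Kubernetes reports missing resources elsewhere.
# The exact reason/message wording is an assumption, not observed output.
expected_not_found = {
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "reason": "NotFound",
    "message": 'componentstatuses "x" not found',
    "code": 404,  # client error: the named component does not exist
}

# A missing resource should map to a 4xx client error, never a 5xx.
assert 400 <= expected_not_found["code"] < 500
```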

How to reproduce it (as minimally and precisely as possible):

We were fuzzing k8s locally via minikube.

  1. start minikube
minikube start --driver virtualbox
  2. install CA certificates
sudo cp ~/.minikube/ca.pem /usr/local/share/ca-certificates/minikube-ca.crt
sudo update-ca-certificates
  3. get the Kubernetes API server address
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
  4. add the IP to /etc/hosts as minikubeca
  5. get an access token and export it as token
kubectl create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts

kubectl create serviceaccount openapifuzz
kubectl get serviceaccounts openapifuzz -o json | jq -r .secrets[].name
kubectl get secret openapifuzz-token-sn9qm -o json | jq -r .data.token | base64 --decode
export token=<retrieved-token>
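Note that the `data` fields of a Secret are base64-encoded in the API object, so the token must be decoded before use. A minimal sketch of that step (the sample payload below is a fabricated placeholder, not a real token):

```python
import base64

# The secret's data.token field as returned by the API (base64-encoded).
# This value is a made-up stand-in for illustration only.
encoded = base64.b64encode(b"example-bearer-token").decode("ascii")

# Equivalent of piping the jq output through `base64 --decode`.
token = base64.b64decode(encoded).decode("utf-8")
print(token)  # the bearer token to pass in the Authorization header
```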
  6. send the request
 curl -X GET -H "authorization: Bearer $token" 'https://minikubeca:8443/api/v1/componentstatuses/x'

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Component not found: x",
  "code": 500
}
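The mismatch above is exactly what openapi-fuzzer flags: the message describes a client-side lookup miss, yet the code is a 5xx server error. A small sketch of such a check, run against the captured response body (the classification heuristic is our own illustration, not part of any tool's API):

```python
import json

# The response body captured from /api/v1/componentstatuses/x above.
body = json.loads("""
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Component not found: x",
  "code": 500
}
""")

# Heuristic: a "not found" failure should surface as a 4xx client error.
looks_like_not_found = "not found" in body["message"].lower()
is_server_error = 500 <= body["code"] < 600

# Both are True here, demonstrating the inconsistency this issue reports.
print(looks_like_not_found and is_server_error)
```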

Environment:

  • Kubernetes version (use kubectl version):
clientVersion:
  buildDate: "2021-04-08T16:31:21Z"
  compiler: gc
  gitCommit: cb303e613a121a29364f75cc67d3d580833a7479
  gitTreeState: clean
  gitVersion: v1.21.0
  goVersion: go1.16.1
  major: "1"
  minor: "21"
  platform: linux/amd64
serverVersion:
  buildDate: "2021-01-13T13:20:00Z"
  compiler: gc
  gitCommit: faecb196815e248d3ecfb03c680a4507229c2a56
  gitTreeState: clean
  gitVersion: v1.20.2
  goVersion: go1.15.5
  major: "1"
  minor: "20"
  platform: linux/amd64
  • Cloud provider or hardware configuration: minikube with virtual-box driver
  • OS (e.g: cat /etc/os-release): Ubuntu 18.04.5 LTS (bionic)
  • Kernel (e.g. uname -a): Linux pine 4.15.0-140-generic #144-Ubuntu SMP Fri Mar 19 14:12:35 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: downloaded precompiled binary
  • minikube version: v1.19.0 (commit: 15cede53bdc5fe242228853e737333b09d4336b5)
@matusf matusf added the kind/bug Categorizes issue or PR as related to a bug. label Apr 22, 2021
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Apr 22, 2021
@k8s-ci-robot (Contributor)

@matusf: The label(s) sig/api, sig/machinery cannot be applied, because the repository doesn't have them.

In response to this:

/sig API Machinery

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


matusf commented Apr 22, 2021

/sig api-machinery

@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Apr 22, 2021

pacoxu commented Apr 22, 2021

< warning: 299 - "v1 ComponentStatus is deprecated in v1.19+"

As ComponentStatus has been deprecated since v1.19 (#93570) and will be removed in a future release, I am not sure whether we need to fix this in 1.22.

@caesarxuchao (Member)

I agree with @pacoxu that we don't need to fix it immediately. Let's wait for more evidence that it's urgent to fix.

@caesarxuchao caesarxuchao added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Apr 22, 2021

jpbetz commented Apr 27, 2021

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Apr 27, 2021
@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 27, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 26, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

No branches or pull requests

6 participants