[BUG] Failed to list *v1beta1.Ingress: the server could not find the requested resources #204

Closed
daegeun-ha opened this issue Oct 18, 2019 · 8 comments
Labels
bug

Comments

@daegeun-ha commented Oct 18, 2019

Describe the bug
When we check the logs of the running BotKube pod, the following error message is repeated endlessly:

E1018 06:30:18.211393 1 reflector.go:123] pkg/mod/k8s.io/client-go@v0.0.0-20190918160344-1fbdaa4c8d90/tools/cache/reflector.go:96: Failed to list *v1beta1.Ingress: the server could not find the requested resource

I think this is caused by a version mismatch, based on what I read in this announcement: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
Our current Kubernetes versions are:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:46:57Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
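
A small discovery query like the following should show whether the API server serves the networking.k8s.io/v1beta1 group at all (a rough sketch with client-go, assuming the default kubeconfig path; on our v1.12 server I expect it to report "not found"):

```go
package main

import (
	"fmt"
	"path/filepath"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed kubeconfig location; adjust for in-cluster or custom setups.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	// On a v1.12 API server this is expected to fail, matching the
	// reflector error in the BotKube logs.
	if _, err := dc.ServerResourcesForGroupVersion("networking.k8s.io/v1beta1"); err != nil {
		fmt.Println("networking.k8s.io/v1beta1 not served:", err)
	} else {
		fmt.Println("networking.k8s.io/v1beta1 is served")
	}
}
```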

How can I fix it?

Screenshots
[screenshot: v1beta1 ingress error]

Additional context
Because we are running a global Kubernetes cluster, I cannot upgrade the Kubernetes version right now. It would be nice if I could fix this by changing a library in the code instead.

@daegeun-ha added the bug label Oct 18, 2019
@PrasadG193 (Member) commented Oct 18, 2019

@daegeun-ha as you know, after K8s 1.16 the extensions apigroup is no longer valid, so you should migrate your resource specs to the correct apigroup.
Coming back to the error you are seeing in the logs, we have already added the prerequisites to the installation page: https://www.botkube.io/installation/

Prerequisites:
- Kubernetes 1.14 or higher is recommended
- For Kubernetes < 1.14, BotKube won’t be able to monitor Ingress resources

Since the API endpoint for the Ingress resource has changed, BotKube needs at least K8s v1.14 for this to work (a sketch of the difference is below). The error has no effect on other functionality; you can still monitor other resources. You can either ignore the message or remove the ingress resource from the BotKube configuration.
We are planning to use dynamic informers in the next release, but I'm not sure whether that will solve this problem.
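
To make the endpoint change concrete: client-go exposes Ingress both under the old extensions/v1beta1 client and the newer networking.k8s.io/v1beta1 client, and only the latter fails on a v1.12 API server. A minimal sketch (assuming a pre-v0.18 client-go, where List takes only ListOptions, the default kubeconfig path, and the "default" namespace as an example):

```go
package main

import (
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Old endpoint: extensions/v1beta1, still served by older clusters such as v1.12.
	if _, err := clientset.ExtensionsV1beta1().Ingresses("default").List(metav1.ListOptions{}); err != nil {
		fmt.Println("extensions/v1beta1 ingresses:", err)
	}

	// New endpoint: networking.k8s.io/v1beta1, available from K8s v1.14 on.
	// On a v1.12 API server this is the call that fails with
	// "the server could not find the requested resource".
	if _, err := clientset.NetworkingV1beta1().Ingresses("default").List(metav1.ListOptions{}); err != nil {
		fmt.Println("networking.k8s.io/v1beta1 ingresses:", err)
	}
}
```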

@daegeun-ha (Author) commented Oct 18, 2019

Thank you for the reply. However, after removing the ingress resource from the BotKube configuration, the error messages still appear.
[screenshot: no ingress]
(There is no informer for ingress.)
Is there anything else I can do about it?

@PrasadG193 (Member) commented Oct 18, 2019

What error do you see after removing ingress from the configuration?

@daegeun-ha (Author) commented Oct 18, 2019

The same errors are printed. (Actually, the error messages above were captured after removing ingress from the configuration!)

@daegeun-ha (Author) commented Oct 18, 2019

Oh, now I've got it.
After removing the 'ingress' key and its value from ResourceInformerMap (/pkg/utils/utils.go), the error messages no longer appear. Thank you for the kind replies. :)
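
For anyone else hitting this on an older cluster, the change boils down to not registering an Ingress informer at all. A rough sketch of the idea (the names and structure here are illustrative, not BotKube's actual /pkg/utils/utils.go, which differs):

```go
package utils

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// buildResourceInformerMap is an illustrative stand-in for ResourceInformerMap:
// each configured resource name maps to a shared informer.
func buildResourceInformerMap(clientset kubernetes.Interface) map[string]cache.SharedIndexInformer {
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Minute)
	return map[string]cache.SharedIndexInformer{
		"pod":     factory.Core().V1().Pods().Informer(),
		"service": factory.Core().V1().Services().Informer(),
		// "ingress": factory.Networking().V1beta1().Ingresses().Informer(),
		// Dropping the "ingress" entry means no Ingress informer is started,
		// so the reflector never hits the networking.k8s.io/v1beta1 endpoint
		// that a v1.12 API server does not serve.
	}
}
```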

@PrasadG193 (Member) commented Oct 18, 2019

The recommended fix would be to upgrade your cluster (for security and other reasons).

If you still want to use older apigroups, the hackish fix would be to revert this change: 8835dea#diff-fb81b0a96eed5b010cfb081b3aef4431L106
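
Presumably the revert amounts to pointing the Ingress informer back at the extensions/v1beta1 endpoint, which older API servers still serve. Roughly (a hypothetical helper, assuming a client-go shared informer factory; the exact lines in the real diff may differ):

```go
package utils

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

// newIngressInformer is a hypothetical helper showing the two endpoints involved.
// legacy=true corresponds to the pre-change behaviour (extensions/v1beta1),
// which a v1.12 API server still serves; legacy=false is the current behaviour
// (networking.k8s.io/v1beta1), which needs K8s v1.14 or newer.
func newIngressInformer(factory informers.SharedInformerFactory, legacy bool) cache.SharedIndexInformer {
	if legacy {
		return factory.Extensions().V1beta1().Ingresses().Informer()
	}
	return factory.Networking().V1beta1().Ingresses().Informer()
}
```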

@daegeun-ha (Author) commented Oct 18, 2019

Thank you, I'll try it!

@daegeun-ha (Author) commented Oct 18, 2019

It would be great if I could upgrade the cluster, but I'm new at this company. By the way, your suggestion works well. Thank you again!

@daegeun-ha closed this Oct 18, 2019