
Disabling "Event" resources does not work #23

Closed
petrov-e opened this issue Jan 27, 2023 · 9 comments · Fixed by #38

Comments

@petrov-e

Hi, the option to ignore "Event" resources doesn't seem to work.

I tried setting it in the configuration the same way as for the other resource types:

resourcesToWatch:
  event: false

or

resourcesToWatch:
  events: false

Neither helps; I can see in the logs that the corresponding controller starts despite the configuration:

time="2023-01-27T10:09:55Z" level=info msg="Starting kubewatch controller" pkg=kubewatch-Event

On our large cluster this generates a lot of noise in our Slack notifications. Thank you in advance!

@petrov-e petrov-e changed the title Disabling "Event" monitoring does not work Disabling "Event" resources does not work Jan 27, 2023
@arikalon1
Collaborator

Hi @petrov-e
Thanks for reporting it

Which chart version are you using?
(You can see it easily with helm list)
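
For example, to list every release and its chart version across all namespaces:

helm list -A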

@petrov-e
Author

@arikalon1

version: 3.3.6

@arikalon1
Collaborator

Thanks.
I think there's a bug in that chart version.
Can you please try it with version 3.3.5?

Just add --version 3.3.5 to the helm install command.
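
For example (the release and chart names here are placeholders; use the same ones as in your original install):

helm upgrade --install <release> <repo>/<chart> -f values.yaml --version 3.3.5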

Can you let us know if you still see the issue on 3.3.5?

@petrov-e
Author

There is no such problem in chart version 3.3.5, since it uses a different image: docker.io/bitnami/kubewatch:0.1.0-debian-10-r571. That image has no kubewatch-Event controller.

But this version has the bug described in #20:

"Missing" namespace. It looks like this:

изображение

@arikalon1
Collaborator

Thanks for the update @petrov-e
We're looking into it

@RoiGlinik
Contributor

@petrov-e you are right, that functionality (disabling Event resources) is missing. We will add it in the next version.
We just released 3.3.7 with a fix for the missing namespace shown in the screenshot you added.

@backaf

backaf commented Feb 21, 2023

I just deployed kubewatch in a development cluster and started seeing the same issue. My events are related to the max DNS search list:

Search Line limits were exceeded, some search paths have been omitted, the applied search line is: monitoring.svc.cluster.local svc.cluster.local cluster.local my.corp.com your.corp.com our.corp.com

This isn't easy to fix on our side, since the max DNS search list is a current Kubernetes limitation. There is a feature gate available in 1.23, but it's in alpha:

https://v1-23.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/
https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/

Certain events are definitely helpful; perhaps it would be better to be able to exclude specific events from the notifications?
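
Purely as a sketch of the idea, something like this in the chart values — the eventFilters key and its fields are hypothetical, nothing like them exists in the chart today (DNSConfigForming is the reason the kubelet attaches to the search-line warning above):

resourcesToWatch:
  event: true
eventFilters:            # hypothetical key — the chart has no such option today
  excludeReasons:
    - DNSConfigForming   # drop the noisy DNS search-line events, keep the rest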

@arikalon1
Collaborator

Thanks for reporting it @backaf

Would you be willing to contribute a PR for that enhancement? (adding exclusion filters to kubewatch)

Alternatively, if you're only interested in k8s warning events, you could send them via Robusta and do the filtering there.
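
As a rough sketch of that option — the trigger and action names below should be verified against the Robusta docs, and the exclude field is an assumption about how the filtering is expressed:

customPlaybooks:
- triggers:
  - on_kubernetes_warning_event_create:   # assumed trigger name — verify in the Robusta docs
      exclude: ["DNSConfigForming"]       # assumption: excludes warning events by reason
  actions:
  - create_finding:                       # assumed action name — verify in the Robusta docs
      title: "Kubernetes warning event"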

@backaf

backaf commented Feb 22, 2023

I'm definitely not experienced enough for such a PR, otherwise it would have sounded like a nice challenge :)

Currently I'm hitting the Teams webhook rate limits, so I won't be able to use kubewatch in our clusters. I will take a look at Robusta, I didn't know about it! Thanks!

@arikalon1 arikalon1 linked a pull request Mar 23, 2023 that will close this issue