Check watch terminations from clusterloader tests #2054
Comments
I believe the easiest way is to add a new Prometheus query to GenericPrometheusQuery here, instead of creating a new measurement.
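A rough sketch of what such an added query entry could look like (the metric name, entry name, and threshold below are my assumptions, not taken from this issue; as I understand it, GenericPrometheusQuery substitutes the measurement window for %v):

```yaml
# Illustrative only: one more entry under an existing GenericPrometheusQuery's
# "queries" param, rather than a brand-new measurement type.
queries:
- name: TerminatedWatchers
  # Assumed metric: kube-apiserver's counter of watches it closed because
  # the consumer fell behind (apiserver_terminated_watchers_total).
  query: sum(increase(apiserver_terminated_watchers_total[%v]))
  threshold: 0  # illustrative; a real threshold would come from observed baselines
```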
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/assign
The only missing thing now is to enable the new measurement in our tests, right?
Enabling the measurement and then, based on the results, possibly adding alerting to it.
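For illustration, a minimal sketch of how the measurement could be wired into a test as a start/gather pair, reusing the query sketched in the earlier comment (the step names, enableViolations flag, and threshold are assumptions about clusterloader2's config schema, not taken from this issue):

```yaml
# Sketch of enabling the check in a clusterloader2 test config.
steps:
- name: Start watch termination check
  measurements:
  - Identifier: WatchTerminations
    Method: GenericPrometheusQuery
    Params:
      action: start
      metricName: Watch Terminations
      metricVersion: v1
      unit: count
      enableViolations: true  # treat threshold breaches as test failures ("alerting")
      queries:
      - name: TerminatedWatchers
        query: sum(increase(apiserver_terminated_watchers_total[%v]))
        threshold: 0  # illustrative; tune from observed baselines
# ... load-generating steps of the test ...
- name: Gather watch termination check
  measurements:
  - Identifier: WatchTerminations
    Method: GenericPrometheusQuery
    Params:
      action: gather
```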
One of the important metrics that may indicate control-plane overload is the number of watches closed by kube-apiserver because the clients don't keep up (or the watch cache itself is not keeping up).
We want to add a check to our tests that validates that this metric is not too high.
Metrics to exercise:
The easiest way to do it is probably to add it to a Prometheus-based measurement, but @marseel to confirm.
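The issue leaves the exact metrics unspecified; as an assumption on my part, the relevant kube-apiserver counter would be apiserver_terminated_watchers_total (watches dropped because consumers fell behind), and queries along these lines could be exercised:

```yaml
# Hypothetical Prometheus queries; the metric name and its "resource"
# label are assumptions, not listed in this issue.
queries:
- name: TerminatedWatchersTotal
  # Overall rate of watch terminations across the apiserver.
  query: sum(rate(apiserver_terminated_watchers_total[5m]))
- name: TerminatedWatchersByResource
  # Breakdown to spot which resource's watchers are falling behind.
  query: sum by (resource) (rate(apiserver_terminated_watchers_total[5m]))
```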