[prometheus-kube-stack] "Error on ingesting out-of-order result from rule evaluation" #1177
Comments
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
/no-stale
Still exists in 18.0.0.
I can provide more information if you need.
Can confirm, same here. Also related to #1283, which has a possible workaround. EDIT: Also related in my case: kubernetes-monitoring/kubernetes-mixin#392 and https://docs.microfocus.com/itom/HCMX:2021.05/PrometheusManyToManyMatching.
I think setting
Forgot the quotes in the workaround :P In my specific case, resolving the many-to-many matching also resolved the out-of-order results after a pod restart. For me it all traced back to an old/dirty installation of Prometheus. In your context, does a new cluster have the same problems? I am running K8s 1.19-1.21 with kube-prometheus-stack 23.3.2.
It is the case for a new installation, yes.
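For readers unfamiliar with the many-to-many error mentioned above, here is a generic PromQL illustration — not the exact expressions from the linked issues, and `some_metric`/`other_metric` are placeholder names:

```promql
# Fails with "many-to-many matching not allowed" when both sides have
# multiple series per matching label value:
some_metric * on (instance) other_metric

# Aggregating one side turns the match into many-to-one,
# which an explicit group_left() then allows:
some_metric * on (instance) group_left() sum by (instance) (other_metric)
```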
@antoineozenne take a look at #1799. Basically, setting
Thank you @bryanasdev000, will use that. |
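The referenced setting is cut off in the comment above. Based on the chart's `defaultRules` values, a workaround of roughly this shape would disable the duplicated rule group — the key name `kubeApiserverHistogram` is an assumption here and may differ by chart version, so see #1799 for the actual setting:

```yaml
# values.yaml (hypothetical override; key name assumed, check your chart version)
defaultRules:
  rules:
    kubeApiserverHistogram: false   # disable the duplicated histogram rule group
```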
Describe the bug
There are some warning-level errors in the logs:
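Paraphrased (the field values are placeholders and the exact format depends on the Prometheus version):

```
level=warn component="rule manager" group=<rule-group> msg="Error on ingesting out-of-order result from rule evaluation" numDropped=<n>
```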
This seems to trigger the PrometheusMissingRuleEvaluations alert.
Version of Helm and Kubernetes:
Helm Version:
Kubernetes Version:
Which chart: kube-prometheus-stack
Which version of the chart: 16.14.1
What happened:
The record `cluster_quantile:apiserver_request_duration_seconds:histogram_quantile`, defined in `kube-apiserver.rules.yaml` and `kube-apiserver-histogram.rules.yaml`, contains a lot of `NaN` values (because of some `0` values in the instant vector passed to `histogram_quantile`). This triggers the PrometheusMissingRuleEvaluations alert.
What you expected to happen:
The rule should be defined so that it does not trigger the alert.
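For context, the record looks roughly like this in the chart's default rules (paraphrased; the exact expression varies by chart version):

```yaml
- record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
  expr: histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{job="apiserver"}[5m])) without (instance, pod))
  labels:
    quantile: "0.99"
```

`histogram_quantile` returns `NaN` when all bucket rates in the window are zero (i.e. no observations), which is where the `NaN` samples in the recorded series come from.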
Anything else:
I noticed that commit f501c4ed62c9e77cf96b46e83202f6ea17a13b97 redefines this record in a second group of rules (`kube-apiserver-histogram.rules` in addition to `kube-apiserver.rules`).
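That duplication is the likely root cause: the two groups evaluate on independent schedules and write the same record name with the same labels, so one group's append can arrive with a timestamp at or before the series' latest sample and be rejected as out-of-order. A minimal sketch of the problematic shape (contents paraphrased, not copied from the chart):

```yaml
groups:
  - name: kube-apiserver.rules
    rules:
      - record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
        expr: histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{job="apiserver"}[5m])) without (instance, pod))
        labels:
          quantile: "0.99"
  - name: kube-apiserver-histogram.rules
    rules:
      # Same record name and labels as above: both groups produce the same
      # output series, so one evaluation's samples get dropped as out-of-order.
      - record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
        expr: histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{job="apiserver"}[5m])) without (instance, pod))
        labels:
          quantile: "0.99"
```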