NaN instead of proper values in metrics #860
Comments
If there are no requests in a period, then it's not possible to calculate the percentile, so NaN is reported instead in that case. Does that answer your question?
Sorry, I'm not sure I understand. In what period?
If it has been a while since the requests, then that is expected. If you graph the percentile values, you'll see they had a non-NaN value around the time of the requests.
So those percentiles are computed only over some time period (not over all the requests so far, i.e. since starting the component)? If so, what is the value of that period, and can it be configured?
See prometheus/client_golang#85 and https://github.com/prometheus/client_golang/blob/fcd2986466589bcf7a411ec3b52d85a8df9dcc8b/prometheus/summary.go#L118. If you want to control things at that level, I'd suggest a Histogram rather than a Summary.
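For readers landing here: the linked summary.go code is where client_golang sets the Summary defaults. A Summary's quantiles are estimated client-side over a sliding decay window (MaxAge, 10 minutes by default), so once that window contains no observations the quantile series becomes NaN. With a Histogram, the percentile is computed at query time over whatever range you choose. A minimal PromQL sketch, with an illustrative metric name:

```
# Histogram: the window is picked at query time, so you control it here.
histogram_quantile(0.99, rate(http_request_duration_seconds_bucket[5m]))

# Summary: the client exports precomputed quantiles over its own MaxAge
# window; the query can only read them back as-is.
http_request_duration_seconds{quantile="0.99"}
```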
I see, thanks for the quick response!
wojtek-t closed this on Jun 26, 2015
wojtek-t referenced this issue on Jun 26, 2015: Fix prometheus metrics used for monitoring performance. #10389 (closed)
quinton-hoole-zz commented on Jun 30, 2015

Hello @brian-brazil. Small world :-)
mikkeloscar referenced this issue on Apr 11, 2017: kube-apiserver metrics apiserver_request_latencies_summary NaN after upgrading to 1.6.1 #44329 (closed)
ashoksahoo commented on May 9, 2017

How do I filter out the NaN? I am doing a group by (metrics_type), and for some types it gives NaN. I am using value > 0.
r4j4h commented on May 10, 2017

@ashoksahoo As far as I know that is the proper way. If you need to catch negatives as well you can combine with
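A hedged note on the idiom (the metric name below is illustrative, and this is not necessarily the exact expression meant above): NaN compares false against everything, so a single comparison such as >= 0 drops NaN along with negative samples, and OR-ing the two opposite comparisons keeps all real values while still dropping NaN.

```
# Drops NaN (and also any negative samples):
my_latency_summary{quantile="0.99"} >= 0

# Drops only NaN, keeping negative samples as well:
my_latency_summary{quantile="0.99"} >= 0 or my_latency_summary{quantile="0.99"} < 0
```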
macnibblet commented on May 25, 2017

I'm having the same problem as @ashoksahoo with my query. The problem is that I have 60+ servers and some of them don't get hit every now and then, which means I'm almost never getting any graphs displayed, because the result of the query above is NaN.
bencoughlan commented on Jun 19, 2017

Is there anything available in this that we can pass in to have it default to 0 if NaN/Infinity shows up?
tirkarthi referenced this issue on Jul 19, 2017: [Bug] Making a query with division involving null values causes Internal server error #8860 (closed)
Keith-Ball commented on Jan 23, 2018

Per grafana/grafana#8860 (comment), it looks like there is a workaround of simply checking that the rate values are >= 0.
soamvasani referenced this issue on Oct 5, 2018: Change function_call_duration metric to be a histogram instead of a summary #920 (open)
MikeSpreitzer commented on Jan 19, 2019

Yes, Histograms are better than Summaries, particularly because they aggregate better.
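To illustrate the aggregation point with a PromQL sketch (metric name is illustrative): per-instance Summary quantiles cannot be combined into a correct fleet-wide quantile, whereas Histogram buckets can be summed across instances before the quantile is computed.

```
# Misleading: averaging per-instance quantiles does not give the true
# fleet-wide 99th percentile.
avg(my_request_duration_seconds{quantile="0.99"})

# Correct with a Histogram: sum the bucket rates across instances first,
# then compute the quantile.
histogram_quantile(0.99, sum by (le) (rate(my_request_duration_seconds_bucket[5m])))
```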
wojtek-t commented on Jun 26, 2015

We're using Prometheus in the Kubernetes project. However, we quite often observe NaN instead of a proper value in our metrics, e.g.:

What is interesting, the sum and count are always OK; the problem is only with the percentiles.

This particular metric is defined as follows:

Do you know why this is happening?
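For reference, the symptom described above looks roughly like this when queried; the metric name is borrowed from the related issue #44329, since the exact definition in this report was not preserved.

```
# The quantile series decays to NaN once the Summary's sliding window
# (MaxAge) contains no observations...
apiserver_request_latencies_summary{quantile="0.99"}

# ...while the running totals never decay and stay well-defined.
apiserver_request_latencies_summary_sum
apiserver_request_latencies_summary_count
```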