Provide more information about recording rules. #3797

iksaif commented Feb 5, 2018 • edited

This is something that has been asked by quite a few of our users: basically, when their instance suddenly has a lot more timeseries or slows down, they try to understand what is causing that. We already have some per-target information; I think it would be great to add some of this metadata to recording rules too.

scrape.target.Target already has lastError and lastScrape; we could add lastNumMetrics and lastScrapeTime. Rules (both alerts and recording rules) recently got GetEvaluationTime; we could add GetLastEvaluationTime() and GetLastOutputMetricsNum() (and input)? If both objects export very similar things, we could even unify the UI (and API) for these.

Is there any work in progress regarding this? Would you be opposed if we submitted a patch in this direction? Thanks!
Comments
Have you tried using 2.1? It has most of these features.
I've seen that 2.1 does have evaluation time, and that is great! But AFAIK:
Are these things that could be added?
They've had that for a long time: scrape_samples_scraped and scrape_duration_seconds.
Why do you want the last evaluation time?
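For context, those two per-target metrics can already be queried directly. A couple of illustrative PromQL expressions; the 1.5 growth factor and the 10-second threshold are arbitrary examples, not values from this thread:

```
# Targets whose sample count grew ~50% compared to an hour earlier
scrape_samples_scraped > 1.5 * (scrape_samples_scraped offset 1h)

# Targets whose scrapes take longer than 10 seconds
scrape_duration_seconds > 10
```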
Very good point, it would be nice to have them somewhere in the target UI too. But at least that's already something we can play with now.
When you see that the output is missing, if you have both the input/output size and the evaluation time, you can super quickly understand why you don't have your new values (either the input is empty or the evaluation is late). Basically, this highlights the fact that you might be late (because you have too many rules, not enough CPU, etc.). The UI could even display part of what is in
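As a sketch of catching the "output is missing" case from the query side today, PromQL's absent() can watch a recorded series; instance:node_cpu:rate5m is a hypothetical recording rule output name, not one from this thread:

```
# Returns 1 (and can drive an alert) when the hypothetical recorded
# series instance:node_cpu:rate5m has no samples at all
absent(instance:node_cpu:rate5m)
```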
The intended way to spot that is that the last evaluation duration is greater than the interval; both are exposed on /metrics.
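As an illustration, later Prometheus releases expose per-rule-group gauges that make this comparison a one-line expression; the exact metric names available in 2.1, the version discussed here, may differ:

```
# Matches any rule group whose last evaluation took longer than its
# configured interval, i.e. evaluations are falling behind
prometheus_rule_group_last_duration_seconds
  > prometheus_rule_group_interval_seconds
```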
OK, fair enough. Do you still think it would be worth it to put a small red/green indicator somewhere in the rule page to show that the last evaluation took longer than the interval?

Also, given your rationale, I'm not sure why we keep the last target scrape time in the target UI, which is essentially the same thing (people use it at startup to know whether their target has been scraped yet). I guess one could argue that scrape failures happen way more often than evaluation failures.

And would you be OK with adding some input/output set size per rule somewhere (either only in the UI, or as a metric like the ones for targets)? I'd like to have it in the UI, but if we do that it might also make sense to put the same number in the target UI.
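In the meantime, a possible workaround sketch: the output set size of a recording rule can be approximated from the query side by counting the series it produced. Here job:http_requests:rate5m is a hypothetical recorded metric name:

```
# Rough "output size": the number of series the recording rule produced
count(job:http_requests:rate5m)
```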
It's a brand new feature; I think it's a bit early to consider adding more to the UI. An evaluation failure would mean Prometheus is broken, while scrape failures are normal. @gouthamve was looking at adding some instrumentation around that; thus far the duration seems to catch most of the expensive rules.
Fine, let's wait for it to settle down a little bit, and add more things later. I'll build what is missing in additional Grafana dashboards for now.
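A sketch of what such dashboard panels might query, assuming the rule-group metrics exposed by later Prometheus releases (names may differ in 2.1):

```
# Evaluation time as a fraction of the configured interval, per group;
# values approaching 1 mean the group is about to fall behind
prometheus_rule_group_last_duration_seconds
  / prometheus_rule_group_interval_seconds

# Rule evaluation failures per group (normally zero)
sum by (rule_group) (rate(prometheus_rule_evaluation_failures_total[5m]))
```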
Closing for now.

brian-brazil closed this Mar 8, 2018
lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.